ACTA IMEKO
May 2014, Volume 3, Number 1, 16 – 18
www.imeko.org

The measurement chain and validation of experimental measurements

R.J. Moffat
Department of Mechanical Engineering, Stanford University, Stanford, California 94305, United States of America

Section: RESEARCH PAPER

Keywords: measurement chain, uncertainty propagation

Citation: R.J. Moffat, The measurement chain and validation of experimental measurements, Acta IMEKO, vol. 3, no. 1, article 5, May 2014, identifier: IMEKO-ACTA-03 (2014)-01-05

Editor: Luca Mari, Università Carlo Cattaneo

Received May 1st, 2014; In final form May 1st, 2014; Published May 2014

Copyright: © 2014 IMEKO. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

This is a reissue of a paper which appeared in ACTA IMEKO 1973, Proceedings of the 6th Congress of the International Measurement Confederation, "Measurement and instrumentation", 17-23.6.1973, Dresden, vol. 1, pp. 45–53. The paper witnesses the sophisticated discussion that, well before the publication of the Guide to the Expression of Uncertainty in Measurement (GUM), was active in the measurement science community around the subject of error and uncertainty, and its consequences on the structure of the measuring process and the way it is performed.

An experienced human operator can often recognize an anomalous combination of data even upon its first occurrence, and may save an apparatus from a serious failure by this "intuition". The human operator's ability to identify trends and anomalies is based not only upon the instantaneous value of a measurand but upon its context (the other associated measurands, including prior values). Decisions are made based on groups of data, some of which may even be non-quantitative (e.g. sound, smell, color, etc.).

The advent of automated data acquisition and computer control opens the door to this same type of decision making by a control system. Certain requirements must be met: (1) the measured data must accurately reflect the state of the system, (2) the computer must "know what to expect" over the range of normal operation, and (3) the computer must be able to distinguish between allowable deviations due to experimental uncertainty and deviations which signify trouble. These are the same problems which face an experimental research program, and it seems likely that the nomenclature and methodology developed for research experiments will be helpful in discussing measurements for computer-aided control.

This paper presents three ideas found useful in planning experimental programs: (1) the nomenclature of the measurement chain, (2) the data reduction program considered as a mathematical model of the real system, and (3) the use of uncertainty analysis to predict the allowable scatter in an experimental result.

The process of finding the numerical value of a measurand is illustrated in Figure 1. Five potentially different values exist for each measurement. The various terms will be illustrated in terms of a hypothetical experiment: determining the exhaust gas temperature of a small engine. The Principal Measurand, in this example, is Temperature; all other descriptors of the system are Peripheral Measurands.

Figure 1. The measurement chain.
The Real Value of the principal measurand is the value the measurand would have if the system were not affected by the measurement process. In the present example, the Real Value would be the temperature of the exhaust in an uninstrumented engine, running at some stated speed and load.

The Available Value is the value of the measurand in the system, at the location of the sensor, while the measurement is being taken. There will always be some difference between the Available Value and the Real Value, though it may be small, since it is impossible to change the state of the sensor without also changing the state of the system. In addition, the presence of the sensor may cause the system to move to a new operating point, resulting in a still further change in the value of the measurand. The Available Value is the one to which the sensor is exposed: a "perfect sensor" would equilibrate at the Available Value. In the example, the presence of the temperature sensor in the exhaust duct will raise the engine back-pressure, requiring a slight increase in fuel flow to maintain the same nominal speed and load. This will result in an increase in the exhaust gas temperature. The Available Value will be higher than the Real Value for this case.

The Achieved Value is the value the measurand has in the sensor, while the measurement is being made. If the calibration of the sensor were perfectly known, then this is the value which would be measured. Many sensors, and particularly thermal sensors, respond to more than one aspect of their surroundings. These system/sensor interactions cause the sensor to equilibrate with its entire environment rather than just the principal measurand, and give rise to what is known as "Environmental Error". In the present example the temperature sensor is subject to radiation error, conduction error, and velocity error. Thus the temperature level in the sensor (the Achieved Value) will be lower than the temperature of the gas stream at the sensor location (the Available Value) due to system/sensor interaction. The difference will depend upon the velocity and composition of the exhaust gases as well as the materials and temperatures of the surrounding duct work and hardware.

The Measured Value is the value which is attributed to the measurand when the output of the sensor is interpreted using the best estimate of the calibration of the sensor. If the calibration were without error, the Measured Value would be equal to the Achieved Value. If the calibration of the sensor is affected by the conditions of use in a manner which is not known to the user, then the Measured Value will be different from the Achieved Value. In the present example, assume the temperature sensor to be a thermocouple whose elements are exposed directly to the gas streams. After a period of time there may be sufficient chemical reaction with the exhaust gases to cause a change in the calibration of the wire. Use of standard EMF–temperature tables for such a thermocouple would result in Measured Values which might be significantly different from Achieved Values.

The Corrected Value is the engineer's best estimate of the Real Value, accounting for all of the recognized sources of error: system disturbance, system/sensor interactions, and calibration change. In order that experimental data properly describe the state of a system, the Corrected Values must be acceptably close to the Real Values.
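The magnitude of such an environmental error can be estimated from an energy balance on the sensor. The following sketch (in Python) illustrates the idea for the radiation error alone, using a steady-state balance between convection to the junction and radiation from the junction to the duct walls; the convective coefficient, emissivity, and temperatures are hypothetical values chosen only for illustration and are not data from any particular engine.

# Minimal sketch: estimating the gas temperature (the Available Value) from the
# thermocouple junction temperature (the Achieved Value) by undoing the radiation
# error with a steady-state energy balance:
#     h * (T_gas - T_tc) = eps * sigma * (T_tc**4 - T_wall**4)
# All names and numerical values are hypothetical, for illustration only.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiation_corrected_gas_temp(T_tc, T_wall, h, emissivity):
    """Estimate the gas temperature (K) from the junction temperature T_tc (K),
    the surrounding wall temperature T_wall (K), the convective heat transfer
    coefficient h (W/(m^2 K)), and the junction emissivity."""
    q_rad = emissivity * SIGMA * (T_tc**4 - T_wall**4)  # net radiant loss per unit area
    return T_tc + q_rad / h                             # undo the radiation error

# Hypothetical exhaust-duct conditions:
T_tc, T_wall = 900.0, 700.0   # K: junction and duct-wall temperatures
h, eps = 120.0, 0.8           # W/(m^2 K) and emissivity

print(radiation_corrected_gas_temp(T_tc, T_wall, h, eps))  # higher than T_tc, as expected

A complete data reduction program would carry similar terms for the conduction and velocity errors, and would also correct the Measured Value for any recognized shift in the calibration of the wire, before reporting a Corrected Value.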
Recognition of the many ways in which unwanted effects can enter a measuring chain is important in devising systems which return valid measurements. Too often, principal emphasis is placed on the calibration of the sensor (the link between the Achieved Value and the Measured Value). The state of the instrumentation art is so well advanced now that, in general, the principal remaining difficulties are caused by the system/sensor interactions.

Errors due to system/sensor interactions can be controlled by either of two techniques: (1) design of a system which minimizes system disturbance and system/sensor interactions, or (2) use of a data processing program, prior to the control mode, which applies the required corrections. The data reduction program and the apparatus must be considered together. The data reduction program may require peripheral data to be gathered (e.g. wall temperature, gas velocity, etc.) in order to properly correct the control data. This relationship is illustrated in Figure 2.

Figure 2. The experimental loop.

Consider a general question: "What is the value of …?" A test apparatus and its associated data program together must account for all system disturbances and system/sensor interactions with a minimum of uncertainty. Large corrections tend to be uncertain, hence the system should be designed to minimize the disturbances and interactions. Whatever cannot be accomplished by the system design must be done by the data processor. If the combination is properly matched, then the Corrected Value will be independent of the peripheral effects: they will be suppressed by the system and corrected for by the program. Only significant information will be passed to the control block. If, for instance, the temperature of the duct walls decreased due to a drop in ambient air temperature, the increased radiation error would cause the Measured Value of temperature to go down, even though the Available Value of temperature remained constant. A properly written data processing program would, however, return the same Corrected Value, since it would properly compute the new radiation correction.

One further problem remains: tolerance on the set point. There is an uncertainty in any physical measurement, and a result computed from several measurements is affected by the uncertainty in each of its inputs. It is desirable to be able to anticipate the uncertainty interval associated with a computed result, R, which results from the recognized uncertainty in each of its inputs. This describes the interval within which the computed result must lie as a result of purely random variations of each of its input variables. The uncertainty interval represents a "tolerance" on the computed result.

For purposes of uncertainty analysis, a single measurement can be regarded as bearing the following information:

x̄ = x ± δx   (20/1)        (1)

where:
x̄ is the most probable mean value of x which would be observed if it were measured many times,
x is the presently recorded value of one measurement of x,
δx is the interval within which the most probable mean is felt to lie,
(20/1) are the "odds" which the experimenter believes apply to the preceding statement, i.e., a measure of confidence.

One frequent technique for estimating the uncertainty in a computed result is that of Kline and McClintock [1], which propagates the uncertainty at constant probability.
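A minimal sketch, in Python, of how a control program might carry this information and use it to separate allowable scatter from deviations which signify trouble; the record layout, the set-point values, and the 20/1 odds shown are hypothetical and serve only to illustrate the idea.

# Minimal sketch: a measurement carried as "best value, uncertainty interval, odds"
# in the sense of equation (1), and a test of whether a new reading is consistent
# with it. All names and numbers are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class Measurement:
    value: float        # presently recorded value of x
    delta: float        # interval within which the most probable mean is felt to lie
    odds: str = "20/1"  # the experimenter's confidence in that interval

def within_tolerance(set_point: Measurement, reading: float) -> bool:
    """True if the reading lies inside the uncertainty interval of the set point,
    i.e. the deviation can be attributed to experimental uncertainty alone."""
    return abs(reading - set_point.value) <= set_point.delta

exhaust_temp = Measurement(value=850.0, delta=6.0)  # K, hypothetical set point
print(within_tolerance(exhaust_temp, 853.5))        # True: allowable scatter
print(within_tolerance(exhaust_temp, 870.0))        # False: signifies trouble

The same test, applied to a computed result rather than a single measurand, requires the propagated tolerance δR developed in the next section.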
Consider a result, R, computed from several variables, the xᵢ, where (1) each xᵢ is independent, and (2) each xᵢ displays a Gaussian distribution of uncertainty. The uncertainty in the result is then given by:

δR = [(∂R/∂x₁ · δx₁)² + (∂R/∂x₂ · δx₂)² + … + (∂R/∂xₙ · δxₙ)²]^(1/2)        (2)

The computing equation R = R(x₁, x₂, x₃, …, xₙ) is the data reduction equation by which R is calculated from its inputs, the xᵢ. The various partial derivatives usually have different values in different parts of the operating range. The uncertainty in the result R is, therefore, governed sometimes by one variable and sometimes by another. Active computer control permits the use of "variable tolerances" which are consistent with the physical laws governing the uncertainty.

The principal problem which arises is: What value of δx should be used? The answer depends upon the use to which the final result will be put. A general answer is shown in Figure 3.

Figure 3. The levels of uncertainty analysis.

If an uncertainty calculation is being made in order to plan a system, then the only component of δx would be the "resolution" of the sensor, or the ability to interpolate data from its output (Zeroth Order Uncertainty). Any real system tends to have small disturbances which vary randomly with time (a timewise "jitter" or unsteadiness), and different sensors have different dynamic characteristics and may introduce different phase shifts into their outputs when exposed to the same process stream. One way to deal with this is to treat the unsteadiness as an uncertainty and add its effect to that of the interpolation problem (First Order Uncertainty). If the final result is to be used in such a way that the absolute level would be important (for example, by subtracting two computed results to determine a difference), then the uncertainties in the calibrations must be included (Nth Order Uncertainty). With the use of uncertainty propagation it becomes possible to set floating limits on the control variables to account for the changing sensitivity of the process to its variables.

SUMMARY

In many respects, the advent of computer-based control brings closer together the areas of measurement for research and measurement for control. If computer control is carried to its logical end, the control function should be preceded by a data reduction program which corrects for the disturbance effect of the sensor and all of the recognized interactions between the system and the sensors. A data reduction program which completely models the behavior of the system will return correct measurement data to the control unit, regardless of the peripheral conditions on the system. Development of the data reduction program should complement and accompany the development of the hardware system.

Uncertainties in the measured data will cause uncertainties in the computed result. This requires establishment of "tolerances" on the control parameters. Uncertainty analysis techniques based on constant-probability propagation provide a rational basis for establishing limits for acceptable excursions.

REFERENCES

[1] S.J. Kline, F.A. McClintock, "Describing Uncertainties in Single-Sample Experiments", Mechanical Engineering, January 1953.
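As a closing illustration, the following sketch (in Python) propagates input uncertainties through a data reduction equation by the root-sum-square rule of equation (2), estimating the partial derivatives numerically so that the tolerance on the result can "float" with the operating point. The data reduction equation, the input values, and their uncertainties are hypothetical and show only the mechanics; this is not the author's program.

# Minimal sketch: root-sum-square propagation of input uncertainties through a
# data reduction function R(x1, ..., xn), per equation (2), with the partial
# derivatives estimated by central differences. The function and the numbers
# are hypothetical, for illustration only.
import math

def propagate_uncertainty(R, x, dx, rel_step=1e-6):
    """Return (R(x), delta_R) for inputs x with uncertainties dx."""
    base = R(x)
    total = 0.0
    for i, (xi, dxi) in enumerate(zip(x, dx)):
        h = rel_step * max(abs(xi), 1.0)
        hi = list(x); lo = list(x)
        hi[i] += h; lo[i] -= h
        dR_dxi = (R(hi) - R(lo)) / (2.0 * h)   # central-difference partial derivative
        total += (dR_dxi * dxi) ** 2
    return base, math.sqrt(total)

# Hypothetical data reduction equation: heat rate Q = m_dot * cp * (T_out - T_in)
def Q(x):
    m_dot, cp, T_out, T_in = x
    return m_dot * cp * (T_out - T_in)

x  = [0.050, 1100.0, 850.0, 300.0]   # kg/s, J/(kg K), K, K  (hypothetical inputs)
dx = [0.001,   20.0,   5.0,   2.0]   # corresponding uncertainties

value, delta = propagate_uncertainty(Q, x, dx)
print(value, delta)   # delta is the tolerance on the result at this operating point

Because the partial derivatives are re-evaluated at the current operating point, the propagated δR changes as the process moves, which is one way the floating limits suggested above could be mechanized.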