Measurement error and uncertainty


When we measure a property such as length, weight, or time, we can introduce errors in our results. Errors, which produce a difference between the real value and the one we measured, are the outcome of something going wrong in the measuring process.

The reasons behind errors can be the instruments used, the people reading the values, or the system used to measure them.

If, for instance, a thermometer with an incorrect scale registers one additional degree every time we use it to measure the temperature, we will always get a measurement that is out by that one degree.

Because of the difference between the real value and the measured one, a degree of uncertainty will pertain to our measurements. Thus, when we measure an object whose actual value we don't know while working with an instrument that produces errors, the actual value exists within an uncertainty range.

The difference between uncertainty and error

The main difference between errors and uncertainties is that an error is the difference between the actual value and the measured value, while an uncertainty is an estimate of the range between them, representing the reliability of the measurement. In this case, the absolute uncertainty will be the difference between the larger value and the smaller one.

A simple example is the measurement of a constant. Let's say we measure the resistance of a material. The measured values will never be exactly the same because the resistance measurements vary. We know there is an accepted value of 3.4 ohms, and by measuring the resistance twice, we obtain the results 3.35 and 3.41 ohms.

Errors produced the values of 3.35 and 3.41, while the range from 3.35 to 3.41 is the uncertainty range.

Let's take another example, in this case, measuring the gravitational acceleration in a laboratory.

The standard gravitational acceleration is 9.81 m/s^2. In the laboratory, conducting some experiments using a pendulum, we obtain four values for g: 9.76 m/s^2, 9.6 m/s^2, 9.89 m/s^2, and 9.9 m/s^2. The variation in values is the product of errors. The mean value is 9.79 m/s^2.

The uncertainty range for the measurements reaches from 9.6 m/s^2 to 9.9 m/s^2, while the absolute uncertainty is approximately equal to half of our range, which is equal to the difference between the maximum and minimum values divided by two.

The absolute uncertainty is reported as:

absolute uncertainty = (maximum value − minimum value) / 2

In this case, it will be:

absolute uncertainty = (9.9 m/s^2 − 9.6 m/s^2) / 2 = 0.15 m/s^2
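The half-range calculation above can be sketched in a few lines of Python (the values are the four pendulum measurements from the example):

```python
# Half-range absolute uncertainty from the pendulum example.
values = [9.76, 9.6, 9.89, 9.9]  # measured values of g in m/s^2

mean = sum(values) / len(values)               # 9.7875
uncertainty = (max(values) - min(values)) / 2  # (9.9 - 9.6) / 2 = 0.15

print(f"g = {mean:.2f} ± {uncertainty:.2f} m/s^2")
```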

What is the standard error in the mean?

The standard error in the mean is the value that tells us how much error we have in our measurements against the mean value. To calculate it, we need to take the following steps:

  1. Calculate the mean of all measurements.
  2. Subtract the mean from each measured value and square the results.
  3. Add up all the squared differences.
  4. Divide the sum by the number of measurements minus one, and take the square root of the result to obtain the standard deviation.
  5. Divide the standard deviation by the square root of the total number of measurements taken.

Let's look at an example.

You have measured the weight of an object four times. The object is known to weigh exactly 3.0 kg with a precision of better than one gram. Your four measurements give you 3.001 kg, 2.997 kg, 3.003 kg, and 3.002 kg. Obtain the error in the mean value.

First, we calculate the mean:

(3.001 kg + 2.997 kg + 3.003 kg + 3.002 kg) / 4 = 3.00075 kg

As we are working to only three digits after the decimal point, we take the mean as 3.000 kg. Now we need to subtract the mean from each value and square the result:

The first squared difference, (3.001 − 3.000)^2 = 0.000001, is so small that to three decimal places we consider it to be 0. Now we proceed with the other differences:

All our results are 0, as we take only three digits after the decimal point. When we divide this sum by the square root of the number of samples, √4, we again get 0.

In this case, the standard error of the mean (σx) is almost nothing.
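Carried out at full precision instead of rounding at each step, the same calculation gives a small but nonzero standard error, still negligible at three decimal places. A sketch in Python, using the standard library:

```python
import statistics

# Standard error of the mean for the four weight measurements,
# keeping full precision instead of rounding at each step.
weights = [3.001, 2.997, 3.003, 3.002]  # kg

mean = statistics.mean(weights)  # 3.00075 kg
s = statistics.stdev(weights)    # sample standard deviation (divides by n - 1)
sem = s / len(weights) ** 0.5    # standard error of the mean, ≈ 0.0013 kg

print(f"mean = {mean:.5f} kg, SEM = {sem:.5f} kg")
```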

What are calibration and tolerance?

Tolerance is the range between the maximum and minimum allowed values for a measurement. Calibration is the process of tuning a measuring instrument so that all measurements fall within the tolerance range.

To calibrate an instrument, its results are compared against other instruments with higher precision and accuracy or against an object whose value has very high precision.

One example is the calibration of a scale.

To calibrate a scale, you must measure a weight that is known to have an approximate value. Let's say you use a mass of one kilogram with a possible error of 1 gram. The tolerance is the range 0.998 kg to 1.002 kg. The scale consistently gives a measure of 1.01 kg. The measured weight is 10 grams above the known value and 8 grams above the upper limit of the tolerance range. The scale does not pass the calibration test if you want to measure weights with high precision.
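The tolerance check in this example reduces to a simple range comparison. A hedged sketch (the variable names are illustrative, not from any calibration standard):

```python
# Tolerance check for the scale example: does the reading fall
# inside nominal ± tolerance?
nominal = 1.000     # known mass, kg
tolerance = 0.002   # allowed deviation, kg
reading = 1.010     # value reported by the scale, kg

low, high = nominal - tolerance, nominal + tolerance
passes = low <= reading <= high

print(f"tolerance range: {low:.3f} kg to {high:.3f} kg")
print("scale passes calibration" if passes else "scale fails calibration")
```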

How is uncertainty reported?

When doing measurements, uncertainty needs to be reported. It helps those reading the results to know the potential variation. To do this, the uncertainty range is added after the symbol ±.

Let's say we measure a resistance value of 4.5 ohms with an uncertainty of 0.1 ohms. The reported value with its uncertainty is 4.5 ± 0.1 ohms.

We find uncertainty values in many processes, from fabrication to design and architecture to mechanics and medicine.

What are absolute and relative errors?

Errors in measurements are either absolute or relative. Absolute errors describe the difference from the expected value. Relative errors express the absolute error as a fraction of the true value.

Absolute error

Absolute error is the difference between the expected value and the measured one. If we take several measurements of a value, we will obtain several errors. A simple example is measuring the velocity of an object.

Let's say we know that a ball moving across the floor has a velocity of 1.4 m/s. We measure the velocity by calculating the time it takes for the ball to move from one point to another using a stopwatch, which gives us a result of 1.42 m/s.

The absolute error of your measurement is 1.42 m/s − 1.4 m/s = 0.02 m/s.

Relative error

Relative error compares the magnitude of the error with the magnitude of the measured value. It shows us that a difference between values can be large in absolute terms yet small compared to the magnitude of the values. Let's take an example of absolute error and see its value compared to the relative error.

You use a stopwatch to measure a ball moving across the floor with a velocity of 1.4 m/s. You calculate how long it takes for the ball to cover a certain distance and divide the length by the time, obtaining a value of 1.42 m/s. The absolute error is 0.02 m/s, while the relative error is 0.02 / 1.4 ≈ 0.014.

As you can see, the relative error is smaller than the absolute error because the difference is small compared to the magnitude of the velocity.

Another example of the difference in scale is an error in a satellite image. An image error of 10 metres is large on a human scale. However, if the image measures 10 kilometres in height by 10 kilometres in width, an error of 10 metres is small.

The relative error can also be reported as a percentage after multiplying by 100 and adding the percentage symbol %.
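The absolute, relative, and percentage forms of the same error can be computed directly from the rolling-ball example:

```python
# Absolute and relative error for the rolling-ball example.
true_value = 1.4   # known velocity, m/s
measured = 1.42    # stopwatch-based measurement, m/s

absolute_error = abs(measured - true_value)   # 0.02 m/s
relative_error = absolute_error / true_value  # ≈ 0.0143
percentage = relative_error * 100             # ≈ 1.43 %

print(f"absolute: {absolute_error:.2f} m/s, relative: {percentage:.1f}%")
```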

Plotting uncertainties and errors

Uncertainties are plotted as bars in graphs and charts. The bars extend from the measured value to the maximum and minimum possible value. The range between the maximum and the minimum value is the uncertainty range. See the following example of uncertainty bars:

Figure 1. Plot showing the mean value points of each measurement. The bars extending from each point indicate how much the data can vary. Source: Manuel R. Camacho, StudySmarter.

See the following example using several measurements:

You carry out four measurements of the velocity of a ball moving 10 metres whose speed is decreasing as it advances. You mark 1-metre divisions, using a stopwatch to measure the time it takes for the ball to move between them.

You know that your reaction time with the stopwatch introduces an uncertainty of around 0.2 m/s. Measuring the time with the stopwatch and dividing the distance by the time, you obtain values equal to 1.4 m/s, 1.22 m/s, 1.15 m/s, and 1.01 m/s.

Because your reaction time delays the stopwatch readings, producing an uncertainty of 0.2 m/s, your results are 1.4 ± 0.2 m/s, 1.22 ± 0.2 m/s, 1.15 ± 0.2 m/s, and 1.01 ± 0.2 m/s.

The plot of the results can be reported as follows:

Figure 2. The plot shows an approximate representation. The dots represent the actual values of 1.4 m/s, 1.22 m/s, 1.15 m/s, and 1.01 m/s. The bars represent the uncertainty of ±0.2 m/s. Source: Manuel R. Camacho, StudySmarter.

How are uncertainties and errors propagated?

Each measurement has errors and uncertainties. When we carry out operations with values taken from measurements, we add these uncertainties to every calculation. The processes by which uncertainties and errors change our calculations are called uncertainty propagation and error propagation, and they produce a deviation from the actual data or data deviation.

There are two approaches here:

  1. If we are using percentage error, we need to calculate the percentage error of each value used in our calculations and then add them together.
  2. If we want to know how uncertainties propagate through the calculations, we need to make our calculations using our values with and without the uncertainties.

The difference is the uncertainty propagation in our results.

See the following examples:

Let's say you measure the gravitational acceleration as 9.91 m/s^2, and you know that your value has an uncertainty of ± 0.1 m/s^2.

You want to calculate the force produced by a falling object. The object has a mass of 2kg with an uncertainty of 1 gram or 2 ± 0.001 kg.

To calculate the propagation using percentage error, we first need the relative error of each measurement. The relative error for 9.91 m/s^2 is 0.1 / 9.91 ≈ 0.01.

Multiplying by 100 and adding the percentage symbol, we get 1%. The mass of 2 kg has an uncertainty of 1 gram, so its percentage error is 0.001 / 2 × 100 = 0.05%.

To determine the percentage error propagation, we add both errors together: 1% + 0.05% = 1.05%.

To calculate the uncertainty propagation, we need to calculate the force as F = m · g. If we calculate the force without the uncertainty, we obtain the expected value: F = 2 kg · 9.91 m/s^2 = 19.82 Newtons.

Now we calculate the value with the uncertainties. Both uncertainties have the same upper and lower limits, ± 1 g and ± 0.1 m/s^2:

F = (2 kg + 0.001 kg) · (9.91 m/s^2 + 0.1 m/s^2) = 20.03 Newtons

We round this number to two decimal places as 20.03 Newtons. Now we subtract both results: 20.03 − 19.82 = 0.21 Newtons.

The result is expressed as expected value ± uncertainty value, in this case 19.82 ± 0.21 Newtons.
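The worked propagation above can be reproduced in a few lines: compute F with and without the uncertainties and subtract.

```python
# F = m * g computed with and without the uncertainties, then subtracted.
g, dg = 9.91, 0.1    # gravitational acceleration, m/s^2
m, dm = 2.0, 0.001   # mass, kg

f_expected = m * g             # 19.82 N
f_upper = (m + dm) * (g + dg)  # both inputs shifted to their upper limits
df = f_upper - f_expected      # ≈ 0.21 N

print(f"F = {f_expected:.2f} ± {df:.2f} N")
```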

If we use values with uncertainties and errors, we need to report this in our results.

Reporting uncertainties

To report a result with uncertainties, we write the calculated value followed by the uncertainty. We can choose to put the quantity inside parentheses. Here is an example of how to report uncertainties.

We measure a force, and according to our results, the force has an uncertainty of 0.21 Newtons.

Our result is 19.62 Newtons, which has a possible variation of plus or minus 0.21 Newtons, so we report it as 19.62 ± 0.21 Newtons.

Propagation of uncertainties

See the following general rules on how uncertainties propagate and how to calculate uncertainties. For any propagation of uncertainty, values must have the same units.

Addition and subtraction: if values are being added or subtracted, the total value of the uncertainty is the result of the addition or subtraction of the uncertainty values. If we have measurements (A ± a) and (B ± b), the result of adding them is A + B with a total uncertainty (± a) + (± b).

Let's say we are adding two pieces of metal with lengths of 1.3 m and 1.2 m. The uncertainties are ± 0.05 m and ± 0.01 m. The total value after adding them is 2.5 m with an uncertainty of ± (0.05 m + 0.01 m) = ± 0.06 m.

Multiplication by an exact number: the total uncertainty value is calculated by multiplying the uncertainty by the exact number.

Let's say we are calculating the circumference of a circle, knowing it is equal to C = 2 · 3.1415 · r. We calculate the radius as r = 1 ± 0.1 m. The uncertainty is 2 · 3.1415 · 0.1 m, giving us an uncertainty value of 0.6283 m.

Division by an exact number: the procedure is the same as in multiplication. In this case, we divide the uncertainty by the exact value to obtain the total uncertainty.

If we have a length of 1.2 m with an uncertainty of ± 0.03 m and divide this by 5, the uncertainty is ± 0.03 / 5 = ± 0.006 m.
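The three simple rules above can be written as small helpers (the function names are illustrative):

```python
# The three propagation rules from the text, as small helpers.
def uncertainty_of_sum(a, b):
    """Addition or subtraction: the uncertainties add."""
    return a + b

def uncertainty_times_constant(u, k):
    """Multiplication by an exact number k: the uncertainty scales by k."""
    return u * k

def uncertainty_over_constant(u, k):
    """Division by an exact number k: the uncertainty is divided by k."""
    return u / k

print(f"{uncertainty_of_sum(0.05, 0.01):.2f}")      # 0.06 (the two metal pieces)
print(f"{uncertainty_over_constant(0.03, 5):.3f}")  # 0.006
```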

Data deviation

We can also calculate the deviation of data produced by the uncertainty after we make calculations using the data. The data deviation changes if we add, subtract, multiply, or divide the values. Data deviation uses the symbol δ.

  • Data deviation after subtraction or addition: to calculate the deviation of the result, we take the square root of the sum of the squared uncertainties: δ = √(a^2 + b^2).

  • Data deviation after multiplication or division: to calculate the data deviation of several measurements, we need the uncertainty–real value ratio of each and then calculate the square root of the sum of the squared terms. For measurements A ± a and B ± b with result y, the relative deviation is δ / |y| = √((a/A)^2 + (b/B)^2).

If we have more than two values, we need to add more terms.

  • Data deviation if exponents are involved: we multiply each relative uncertainty by its exponent and then apply the multiplication and division formula. If we have y = (A ± a)^2 · (B ± b)^3, the deviation will be δ / |y| = √((2a/A)^2 + (3b/B)^2).

If we have more than two values, we need to add more terms.
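The quadrature formulas above translate directly into code; a sketch with two helpers (names illustrative), where y in the second helper is the product or quotient of A and B:

```python
import math

# Quadrature rules for data deviation.
def deviation_sum(a, b):
    """δ after addition or subtraction: sqrt(a^2 + b^2)."""
    return math.sqrt(a**2 + b**2)

def relative_deviation_product(A, a, B, b):
    """δ / |y| after multiplication or division: sqrt((a/A)^2 + (b/B)^2)."""
    return math.sqrt((a / A) ** 2 + (b / B) ** 2)

# Example: (10 ± 0.3) + (5 ± 0.4) has deviation sqrt(0.09 + 0.16) ≈ 0.5
print(f"{deviation_sum(0.3, 0.4):.2f}")
```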

Rounding numbers

When errors and uncertainties are either very small or very large, it is convenient to remove terms if they do not alter our results. When we round numbers, we can round up or down.

Measuring the gravitational acceleration on Earth, our value is 9.81 m/s^2, and we have an uncertainty of ± 0.10003 m/s^2. The 0.1 after the decimal point varies our measurement by 0.1 m/s^2; however, the trailing 0.00003 has a magnitude so small that its effect would be barely noticeable. We can therefore round the uncertainty to ± 0.1 m/s^2 by removing everything after the 0.1.

Rounding integers and decimals

To round numbers, we need to decide what values are important depending on the magnitude of the data.

There are two options when rounding numbers, rounding up or down. The option we choose depends on the number after the digit we think is the lowest value that is important for our measurements.

  • Rounding up: we eliminate the numbers that we think are not necessary. A simple example is rounding up 3.25 to 3.3.
  • Rounding down: again, we eliminate the numbers that we think are not necessary. An example is rounding down 76.24 to 76.2.
  • The rule when rounding up and down: as a general rule, if the digit after the last digit we keep is between 0 and 4, we round down; if it is between 5 and 9, we round up, so a 5 always rounds up. For instance, 3.16 and 3.15 become 3.2, while 3.14 becomes 3.1.
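The "5 always rounds up" rule above corresponds to half-up rounding. Note that Python's built-in round() instead rounds halves to the nearest even digit, so a small helper based on the decimal module is needed to match the rule exactly:

```python
from decimal import Decimal, ROUND_HALF_UP

# Half-up rounding, matching the "5 always rounds up" rule.
def round_half_up(x, places):
    quantum = Decimal(10) ** -places
    return float(Decimal(str(x)).quantize(quantum, rounding=ROUND_HALF_UP))

print(round_half_up(3.15, 1))  # 3.2
print(round_half_up(3.16, 1))  # 3.2
print(round_half_up(3.14, 1))  # 3.1
```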

By looking at the question, you can often deduce how many decimal places (or significant figures) are needed. Let's say you are given a plot with numbers that have only two decimal places. You would then also be expected to include two decimal places in your answers.

Round quantities with uncertainties and errors

When we have measurements with errors and uncertainties, the values with higher errors and uncertainties set the total uncertainty and error values. Another approach is required when the question asks for a certain number of decimals.

Let's say we have two values (9.3 ± 0.4) and (10.2 ± 0.14). If we add both values, we also need to add their uncertainties, giving a total uncertainty of | 0.4 | + | 0.14 | = ± 0.54. Rounding 0.54 to one decimal place gives us 0.5, as 0.54 is closer to 0.5 than to 0.6.

Therefore, the result of adding both numbers and their uncertainties and rounding the results is 19.5 ± 0.5m.

Let's say you are given two values to multiply, and both have uncertainties. You are asked to calculate the total error propagated. The quantities are A = 3.4 ± 0.01 and B = 5.6 ± 0.1. The question asks you to calculate the error propagated up to one decimal place.

First, you calculate the percentage error of both:

percentage error of A: 0.01 / 3.4 × 100 ≈ 0.29%
percentage error of B: 0.1 / 5.6 × 100 ≈ 1.79%

The total error is 0.29% + 1.79% ≈ 2.08%.
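The percentage-error calculation can be checked at full precision before any rounding:

```python
# Percentage-error propagation for A = 3.4 ± 0.01 times B = 5.6 ± 0.1.
A, dA = 3.4, 0.01
B, dB = 5.6, 0.1

pct_A = dA / A * 100   # ≈ 0.29 %
pct_B = dB / B * 100   # ≈ 1.79 %
total = pct_A + pct_B  # ≈ 2.08 %

print(f"{pct_A:.2f}% + {pct_B:.2f}% = {total:.2f}%")
```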

You have been asked to approximate to only one decimal place. The result can vary depending on whether you truncate the total to its first decimal (2.0%) or round it (2.1%).

Uncertainty and Error in Measurements — Key takeaways

  • Uncertainties and errors introduce variations in measurements and their calculations.
  • Uncertainties are reported so that users can know how much the measured value can vary.
  • There are two types of errors, absolute errors and relative errors. An absolute error is the difference between the expected value and the measured one. A relative error is the absolute error expressed as a fraction of the expected value.
  • Errors and uncertainties propagate when we make calculations with data that has errors or uncertainties.
  • When we use data with uncertainties or errors, the data with the largest error or uncertainty dominates the smaller ones. It is useful to calculate how the error propagates, so we know how reliable our results are.

In metrology, measurement uncertainty is the expression of the statistical dispersion of the values attributed to a measured quantity. All measurements are subject to uncertainty and a measurement result is complete only when it is accompanied by a statement of the associated uncertainty, such as the standard deviation. By international agreement, this uncertainty has a probabilistic basis and reflects incomplete knowledge of the quantity value. It is a non-negative parameter.[1]

The measurement uncertainty is often taken as the standard deviation of a state-of-knowledge probability distribution over the possible values that could be attributed to a measured quantity. Relative uncertainty is the measurement uncertainty relative to the magnitude of a particular single choice for the value for the measured quantity, when this choice is nonzero. This particular single choice is usually called the measured value, which may be optimal in some well-defined sense (e.g., a mean, median, or mode). Thus, the relative measurement uncertainty is the measurement uncertainty divided by the absolute value of the measured value, when the measured value is not zero.

Background

The purpose of measurement is to provide information about a quantity of interest – a measurand. For example, the measurand might be the size of a cylindrical feature, the volume of a vessel, the potential difference between the terminals of a battery, or the mass concentration of lead in a flask of water.

No measurement is exact. When a quantity is measured, the outcome depends on the measuring system, the measurement procedure, the skill of the operator, the environment, and other effects.[2] Even if the quantity were to be measured several times, in the same way and in the same circumstances, a different measured value would in general be obtained each time, assuming the measuring system has sufficient resolution to distinguish between the values.

The dispersion of the measured values would relate to how well the measurement is performed.
Their average would provide an estimate of the true value of the quantity that generally would be more reliable than an individual measured value.
The dispersion and the number of measured values would provide information relating to the average value as an estimate of the true value.
However, this information would not generally be adequate.

The measuring system may provide measured values that are not dispersed about the true value, but about some value offset from it. Take a domestic bathroom scale. Suppose it is not set to show zero when there is nobody on the scale, but to show some value offset from zero. Then, no matter how many times the person’s mass were re-measured, the effect of this offset would be inherently present in the average of the values.

The "Guide to the Expression of Uncertainty in Measurement" (commonly known as the GUM) is the definitive document on this subject. The GUM has been adopted by all major National Measurement Institutes (NMIs) and by international laboratory accreditation standards such as ISO/IEC 17025 General requirements for the competence of testing and calibration laboratories, which is required for international laboratory accreditation; and is employed in most modern national and international documentary standards on measurement methods and technology. See Joint Committee for Guides in Metrology.

Measurement uncertainty has important economic consequences for calibration and measurement activities. In calibration reports, the magnitude of the uncertainty is often taken as an indication of the quality of the laboratory, and smaller uncertainty values generally are of higher value and of higher cost. The American Society of Mechanical Engineers (ASME) has produced a suite of standards addressing various aspects of measurement uncertainty. For example, ASME standards are used to address the role of measurement uncertainty when accepting or rejecting products based on a measurement result and a product specification,[3] provide a simplified approach (relative to the GUM) to the evaluation of dimensional measurement uncertainty,[4] resolve disagreements over the magnitude of the measurement uncertainty statement,[5] or provide guidance on the risks involved in any product acceptance/rejection decision.[6]

Indirect measurement

The above discussion concerns the direct measurement of a quantity, which incidentally occurs rarely. For example, the bathroom scale may convert a measured extension of a spring into an estimate of the measurand, the mass of the person on the scale. The particular relationship between extension and mass is determined by the calibration of the scale. A measurement model converts a quantity value into the corresponding value of the measurand.

There are many types of measurement in practice and therefore many models. A simple measurement model (for example for a scale, where the mass is proportional to the extension of the spring) might be sufficient for everyday domestic use. Alternatively, a more sophisticated model of a weighing, involving additional effects such as air buoyancy, is capable of delivering better results for industrial or scientific purposes. In general there are often several different quantities, for example temperature, humidity and displacement, that contribute to the definition of the measurand, and that need to be measured.

Correction terms should be included in the measurement model when the conditions of measurement are not exactly as stipulated. These terms correspond to systematic errors. Given an estimate of a correction term, the relevant quantity should be corrected by this estimate. There will be an uncertainty associated with the estimate, even if the estimate is zero, as is often the case. Instances of systematic errors arise in height measurement, when the alignment of the measuring instrument is not perfectly vertical, and the ambient temperature is different from that prescribed. Neither the alignment of the instrument nor the ambient temperature is specified exactly, but information concerning these effects is available, for example the lack of alignment is at most 0.001° and the ambient temperature at the time of measurement differs from that stipulated by at most 2 °C.

As well as raw data representing measured values, there is another form of data that is frequently needed in a measurement model. Some such data relate to quantities representing physical constants, each of which is known imperfectly. Examples are material constants such as modulus of elasticity and specific heat. There are often other relevant data given in reference books, calibration certificates, etc., regarded as estimates of further quantities.

The items required by a measurement model to define a measurand are known as input quantities in a measurement model. The model is often referred to as a functional relationship. The output quantity in a measurement model is the measurand.

Formally, the output quantity, denoted by Y, about which information is required, is often related to input quantities, denoted by X_1, …, X_N, about which information is available, by a measurement model in the form of

Y = f(X_1, …, X_N),

where f is known as the measurement function. A general expression for a measurement model is

h(Y, X_1, …, X_N) = 0.

It is taken that a procedure exists for calculating Y given X_1, …, X_N, and that Y is uniquely defined by this equation.

Propagation of distributions

The true values of the input quantities X_1, …, X_N are unknown.
In the GUM approach, X_1, …, X_N are characterized by probability distributions and treated mathematically as random variables.
These distributions describe the respective probabilities of their true values lying in different intervals, and are assigned based on available knowledge concerning X_1, …, X_N.
Sometimes, some or all of X_1, …, X_N are interrelated and the relevant distributions, which are known as joint, apply to these quantities taken together.

Consider estimates x_1, …, x_N, respectively, of the input quantities X_1, …, X_N, obtained from certificates and reports, manufacturers' specifications, the analysis of measurement data, and so on.
The probability distributions characterizing X_1, …, X_N are chosen such that the estimates x_1, …, x_N, respectively, are the expectations[7] of X_1, …, X_N.
Moreover, for the ith input quantity, consider a so-called standard uncertainty, given the symbol u(x_i), defined as the standard deviation[7] of the input quantity X_i.
This standard uncertainty is said to be associated with the (corresponding) estimate x_i.

The use of available knowledge to establish a probability distribution to characterize each quantity of interest applies to the X_i and also to Y.
In the latter case, the characterizing probability distribution for Y is determined by the measurement model together with the probability distributions for the X_i.
The determination of the probability distribution for Y from this information is known as the propagation of distributions.[7]

The figure below depicts a measurement model Y = X_1 + X_2 in the case where X_1 and X_2 are each characterized by a (different) rectangular, or uniform, probability distribution.
Y has a symmetric trapezoidal probability distribution in this case.
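The propagation of distributions for this additive model can be sketched numerically with a small Monte Carlo simulation; the interval endpoints below are illustrative, and analytically E[Y] = 0.5 + 1.5 = 2.0 with u(Y) = √(1/12 + 9/12) ≈ 0.913:

```python
import random

# Monte Carlo propagation of two rectangular distributions through
# Y = X1 + X2, with X1 uniform on [0, 1] and X2 uniform on [0, 3].
random.seed(0)
N = 100_000
y = [random.uniform(0.0, 1.0) + random.uniform(0.0, 3.0) for _ in range(N)]

mean_y = sum(y) / N                                     # estimate of Y
u_y = (sum((v - mean_y) ** 2 for v in y) / (N - 1)) ** 0.5  # standard uncertainty

print(f"estimate ≈ {mean_y:.2f}, standard uncertainty ≈ {u_y:.2f}")
```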

Figure: an additive measurement function with two input quantities X_1 and X_2 characterized by rectangular probability distributions.

Once the input quantities X_1, …, X_N have been characterized by appropriate probability distributions, and the measurement model has been developed, the probability distribution for the measurand Y is fully specified in terms of this information. In particular, the expectation of Y is used as the estimate of Y, and the standard deviation of Y as the standard uncertainty associated with this estimate.

Often an interval containing Y with a specified probability is required. Such an interval, a coverage interval, can be deduced from the probability distribution for Y. The specified probability is known as the coverage probability. For a given coverage probability, there is more than one coverage interval. The probabilistically symmetric coverage interval is an interval for which the probabilities (summing to one minus the coverage probability) of a value to the left and the right of the interval are equal. The shortest coverage interval is an interval for which the length is least over all coverage intervals having the same coverage probability.

Prior knowledge about the true value of the output quantity Y can also be considered. For the domestic bathroom scale, the fact that the person’s mass is positive, and that it is the mass of a person, rather than that of a motor car, that is being measured, both constitute prior knowledge about the possible values of the measurand in this example. Such additional information can be used to provide a probability distribution for Y that can give a smaller standard deviation for Y and hence a smaller standard uncertainty associated with the estimate of Y.[8][9][10]

Type A and Type B evaluation of uncertainty

Knowledge about an input quantity X_i is inferred from repeated measured values ("Type A evaluation of uncertainty"), or scientific judgement or other information concerning the possible values of the quantity ("Type B evaluation of uncertainty").

In Type A evaluations of measurement uncertainty, the assumption is often made that the distribution best describing an input quantity X given repeated measured values of it (obtained independently) is a Gaussian distribution.
X then has expectation equal to the average measured value and standard deviation equal to the standard deviation of the average.
When the uncertainty is evaluated from a small number of measured values (regarded as instances of a quantity characterized by a Gaussian distribution), the corresponding distribution can be taken as a t-distribution.[11]
Other considerations apply when the measured values are not obtained independently.

For a Type B evaluation of uncertainty, often the only available information is that X lies in a specified interval [a,b].
In such a case, knowledge of the quantity can be characterized by a rectangular probability distribution[11] with limits a and b.
If different information were available, a probability distribution consistent with that information would be used.[12]
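Both evaluation types reduce to short formulas: Type A uses the standard deviation of the mean of repeated readings, while Type B for a rectangular distribution on [a, b] uses (b − a)/√12. A sketch with illustrative values:

```python
import math
import statistics

# Type A: standard uncertainty from repeated readings is the
# standard deviation of the mean.
readings = [9.76, 9.6, 9.89, 9.9]
u_type_a = statistics.stdev(readings) / math.sqrt(len(readings))

# Type B: a quantity only known to lie in [a, b], characterized by a
# rectangular distribution with standard deviation (b - a) / sqrt(12).
a, b = 9.6, 9.9
u_type_b = (b - a) / math.sqrt(12)

print(f"Type A: u ≈ {u_type_a:.3f}, Type B: u ≈ {u_type_b:.3f}")
```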

Sensitivity coefficients

Sensitivity coefficients c_1, …, c_N describe how the estimate y of Y would be influenced by small changes in the estimates x_1, …, x_N of the input quantities X_1, …, X_N.
For the measurement model Y = f(X_1, …, X_N), the sensitivity coefficient c_i equals the first-order partial derivative of f with respect to X_i, evaluated at X_1 = x_1, X_2 = x_2, etc.
For a linear measurement model

Y = c_1 X_1 + … + c_N X_N,

with X_1, …, X_N independent, a change in x_i equal to u(x_i) would give a change c_i u(x_i) in y.
This statement would generally be approximate for measurement models Y = f(X_1, …, X_N).
The relative magnitudes of the terms |c_i| u(x_i) are useful in assessing the respective contributions from the input quantities to the standard uncertainty u(y) associated with y.
The standard uncertainty u(y) associated with the estimate y of the output quantity Y is not given by the sum of the |c_i| u(x_i), but by these terms combined in quadrature,[1] namely by an expression that is generally approximate for measurement models Y = f(X_1, …, X_N):

u^2(y) = c_1^2 u^2(x_1) + … + c_N^2 u^2(x_N),

which is known as the law of propagation of uncertainty.
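The quadrature combination in the law of propagation of uncertainty is easy to state in code (the helper name is illustrative):

```python
import math

# Law of propagation of uncertainty for independent inputs: the terms
# c_i * u(x_i) combine in quadrature, not as a plain sum.
def combined_uncertainty(sensitivities, uncertainties):
    return math.sqrt(sum((c * u) ** 2 for c, u in zip(sensitivities, uncertainties)))

# Y = X1 + X2 (c1 = c2 = 1) with u(x1) = 0.3 and u(x2) = 0.4:
u_y = combined_uncertainty([1.0, 1.0], [0.3, 0.4])
print(f"{u_y:.2f}")  # 0.50, not the plain sum 0.70
```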

When the input quantities X_i contain dependencies, the above formula is augmented by terms containing covariances,[1] which may increase or decrease u(y).

Uncertainty evaluation

The main stages of uncertainty evaluation constitute formulation and calculation, the latter consisting of propagation and summarizing.
The formulation stage constitutes

  1. defining the output quantity Y (the measurand),
  2. identifying the input quantities on which Y depends,
  3. developing a measurement model relating Y to the input quantities, and
  4. on the basis of available knowledge, assigning probability distributions — Gaussian, rectangular, etc. — to the input quantities (or a joint probability distribution to those input quantities that are not independent).

The calculation stage consists of propagating the probability distributions for the input quantities through the measurement model to obtain the probability distribution for the output quantity Y, and summarizing by using this distribution to obtain

  1. the expectation of Y, taken as an estimate y of Y,
  2. the standard deviation of Y, taken as the standard uncertainty u(y) associated with y, and
  3. a coverage interval containing Y with a specified coverage probability.

The propagation stage of uncertainty evaluation is known as the propagation of distributions, various approaches for which are available, including

  1. the GUM uncertainty framework, constituting the application of the law of propagation of uncertainty, and the characterization of the output quantity Y by a Gaussian or a t-distribution,
  2. analytic methods, in which mathematical analysis is used to derive an algebraic form for the probability distribution for Y, and
  3. a Monte Carlo method,[7] in which an approximation to the distribution function for Y is established numerically by making random draws from the probability distributions for the input quantities, and evaluating the model at the resulting values.

For any particular uncertainty evaluation problem, approach 1), 2) or 3) (or some other approach) is used, 1) being generally approximate, 2) exact, and 3) providing a solution with a numerical accuracy that can be controlled.
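A minimal sketch of approach 3), the Monte Carlo method, assuming an illustrative nonlinear model Y = X_1/X_2 with Gaussian distributions assigned to the inputs (the distributions and numerical values are invented for the example):

```python
import random
import statistics

# Illustrative measurement model: Y = X1 / X2 (nonlinear).
random.seed(1)
M = 100_000  # number of Monte Carlo trials

ys = []
for _ in range(M):
    x1 = random.gauss(10.0, 0.1)   # X1 ~ N(10, 0.1^2)
    x2 = random.gauss(2.0, 0.05)   # X2 ~ N(2, 0.05^2)
    ys.append(x1 / x2)             # propagate draws through the model

y = statistics.fmean(ys)           # expectation, taken as the estimate of Y
u_y = statistics.stdev(ys)         # standard deviation, taken as u(y)
ys.sort()
# 95 % coverage interval from the 2.5th and 97.5th percentiles
lo, hi = ys[int(0.025 * M)], ys[int(0.975 * M)]
print(f"y = {y:.3f}, u(y) = {u_y:.3f}, 95% interval = [{lo:.3f}, {hi:.3f}]")
```

The numerical accuracy improves as the number of trials M is increased, which is the sense in which this approach "can be controlled".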

Models with any number of output quantities[edit]

When the measurement model is multivariate, that is, it has any number of output quantities, the above concepts can be extended.[13] The output quantities are now described by a joint probability distribution, the coverage interval becomes a coverage region, the law of propagation of uncertainty has a natural generalization, and a calculation procedure that implements a multivariate Monte Carlo method is available.

Uncertainty as an interval[edit]

The most common view of measurement uncertainty uses random variables as mathematical models for uncertain quantities and simple probability distributions as sufficient for representing measurement uncertainties. In some situations, however, a mathematical interval might be a better model of uncertainty than a probability
distribution. This may include situations involving periodic measurements, binned data values, censoring, detection limits, or plus-minus ranges of measurements where no particular probability distribution seems justified or where one cannot assume that the errors among individual measurements are completely independent.[citation needed]

A more robust representation of measurement uncertainty in such cases can be fashioned from intervals.[14][15] An interval [a, b] is different from a rectangular or uniform probability distribution over the same range in that the latter suggests that the true value lies inside the right half of the range [(a + b)/2, b] with probability one half, and within any subinterval of [a, b] with probability equal to the width of the subinterval divided by b − a. The interval makes no such claims, except simply that the measurement lies somewhere within the interval. Distributions of such measurement intervals can be summarized as probability boxes and Dempster–Shafer structures over the real numbers, which incorporate both aleatoric and epistemic uncertainties.
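The distinction can be made concrete with elementary interval arithmetic, sketched here in Python; the helper functions and example intervals are illustrative, not a standard library API:

```python
def interval_add(i1, i2):
    """Sum of two intervals: the result bounds every possible sum,
    with no probability claim about where the true value lies."""
    a1, b1 = i1
    a2, b2 = i2
    return (a1 + a2, b1 + b2)

def interval_mul(i1, i2):
    """Product of two intervals: take the extremes over all corner products."""
    a1, b1 = i1
    a2, b2 = i2
    products = [a1 * a2, a1 * b2, b1 * a2, b1 * b2]
    return (min(products), max(products))

print(interval_add((1.0, 2.0), (0.5, 1.5)))   # (1.5, 3.5)
print(interval_mul((2.0, 3.0), (-1.0, 2.0)))  # (-3.0, 6.0)
```

Unlike the propagation of probability distributions, the resulting interval asserts only that the value lies somewhere within its bounds.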

See also[edit]

  • Accuracy and precision
  • Confidence interval
  • Experimental uncertainty analysis
  • History of measurement
  • List of uncertainty propagation software
  • Propagation of uncertainty
  • Repeatability
  • Set identification
  • Test method
  • Uncertainty
  • Uncertainty quantification
  • Random-fuzzy variable

References[edit]

  1. ^ a b c JCGM 100:2008. Evaluation of measurement data – Guide to the expression of uncertainty in measurement, Joint Committee for Guides in Metrology.
  2. ^ Bell, S. Measurement Good Practice Guide No. 11. A Beginner’s Guide to Uncertainty of Measurement. Tech. rep., National Physical Laboratory, 1999.
  3. ^ ASME B89.7.3.1, Guidelines for Decision Rules in Determining Conformance to Specifications
  4. ^ ASME B89.7.3.2, Guidelines for the Evaluation of Dimensional Measurement Uncertainty
  5. ^ ASME B89.7.3.3, Guidelines for Assessing the Reliability of Dimensional Measurement Uncertainty Statements
  6. ^ ASME B89.7.4, Measurement Uncertainty and Conformance Testing: Risk Analysis
  7. ^ a b c d JCGM 101:2008. Evaluation of measurement data – Supplement 1 to the «Guide to the expression of uncertainty in measurement» – Propagation of distributions using a Monte Carlo method. Joint Committee for Guides in Metrology.
  8. ^ Bernardo, J., and Smith, A. «Bayesian Theory». John Wiley & Sons, New York, USA, 2000. 3.20
  9. ^ Elster, Clemens (2007). «Calculation of uncertainty in the presence of prior knowledge». Metrologia. 44 (2): 111–116. Bibcode:2007Metro..44..111E. doi:10.1088/0026-1394/44/2/002. S2CID 123445853.
  10. ^ EURACHEM/CITAC. «Quantifying uncertainty in analytical measurement». Tech. Rep. Guide CG4, EURACHEM/CITAC, 2000. Second edition.
  11. ^ a b JCGM 104:2009. Evaluation of measurement data – An introduction to the «Guide to the expression of uncertainty in measurement» and related documents. Joint Committee for Guides in Metrology.
  12. ^ Weise, K.; Woger, W. (1993). «A Bayesian theory of measurement uncertainty». Measurement Science and Technology. 4 (1): 1–11. Bibcode:1993MeScT…4….1W. doi:10.1088/0957-0233/4/1/001. S2CID 250751314.
  13. ^ Joint Committee for Guides in Metrology (2011). JCGM 102: Evaluation of Measurement Data – Supplement 2 to the «Guide to the Expression of Uncertainty in Measurement» – Extension to Any Number of Output Quantities (PDF) (Technical report). JCGM. Retrieved 13 February 2013.
  14. ^ Manski, C.F. (2003); Partial Identification of Probability Distributions, Springer Series in Statistics, Springer, New York
  15. ^ Ferson, S., V. Kreinovich, J. Hajagos, W. Oberkampf, and L. Ginzburg (2007); Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty, Sandia National Laboratories SAND 2007-0939

Further reading[edit]

  • Bich, W., Cox, M. G., and Harris, P. M. Evolution of the «Guide to the Expression of Uncertainty in Measurement». Metrologia, 43(4):S161–S166, 2006.
  • Cox, M. G., and Harris, P. M. SSfM Best Practice Guide No. 6, Uncertainty evaluation. Technical report DEM-ES-011, National Physical Laboratory, 2006.
  • Cox, M. G., and Harris, P. M. Software specifications for uncertainty evaluation. Technical report DEM-ES-010, National Physical Laboratory, 2006.
  • Grabe, M., Measurement Uncertainties in Science and Technology, Springer 2005.
  • Grabe, M. Generalized Gaussian Error Calculus, Springer 2010.
  • Dietrich, C. F. (1991). Uncertainty, Calibration and Probability. Bristol, UK: Adam Hilger.
  • EA. Expression of the uncertainty of measurement in calibration. Technical Report EA-4/02, European Co-operation for Accreditation, 1999.
  • Elster, C., and Toman, B. Bayesian uncertainty analysis under prior ignorance of the measurand versus analysis using Supplement 1 to the Guide: a comparison. Metrologia, 46:261–266, 2009.
  • Ferson, S.; Kreinovich, V.; Hajagos, J.; Oberkampf, W.; Ginzburg, L. (2007). «Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty» (PDF).
  • Lira, I. Evaluating the Uncertainty of Measurement. Fundamentals and Practical Guidance. Institute of Physics, Bristol, UK, 2002.
  • Majcen N., Taylor P. (Editors), Practical examples on traceability, measurement uncertainty and validation in chemistry, Vol 1, 2010; ISBN 978-92-79-12021-3.
  • Possolo, A., and Iyer, H. K. Concepts and tools for the evaluation of measurement uncertainty. Rev. Sci. Instrum., 88, 011301 (2017).
  • UKAS. The expression of uncertainty in EMC testing. Technical Report LAB34, United Kingdom Accreditation Service, 2002.
  • UKAS M3003 The Expression of Uncertainty and Confidence in Measurement (Edition 3, November 2012) UKAS
  • ASME PTC 19.1, Test Uncertainty, New York: The American Society of Mechanical Engineers; 2005
  • Rouaud, M. (2013), Propagation of Uncertainties in Experimental Measurement (PDF) (short ed.)
  • Da Silva, R.B.; Bulska, E.; Godlewska-Zylkiewicz, B.; Hedrich, M.; Majcen, N.; Magnusson, B.; Marincic, S.; Papadakis, I.; Patriarca, M.; Vassileva, E.; Taylor, P. (2012). Analytical measurement: measurement uncertainty and statistics. ISBN 978-92-79-23070-7.
  • Arnaut, L. R. (2008). «Measurement uncertainty in reverberation chambers – I. Sample statistics. Technical report TQE 2» (PDF) (2nd ed.). National Physical Laboratory. Archived from the original (PDF) on 2016-03-04. Retrieved 2013-09-26.
  • Leito, I.; Jalukse, L.; Helm, I. (2013). «Estimation of measurement uncertainty in chemical analysis (analytical chemistry)». On-line course, University of Tartu.

External links[edit]

  • NPLUnc
  • Estimate of temperature and its uncertainty in small systems, 2011.
  • Introduction to evaluating uncertainty of measurement
  • JCGM 200:2008. International Vocabulary of Metrology – Basic and general concepts and associated terms, 3rd Edition. Joint Committee for Guides in Metrology.
  • ISO 3534-1:2006. Statistics – Vocabulary and symbols – Part 1: General statistical terms and terms used in probability. ISO
  • JCGM 106:2012. Evaluation of measurement data – The role of measurement uncertainty in conformity assessment. Joint Committee for Guides in Metrology.
  • NIST. Uncertainty of measurement results.


Measurement error

Measurement error analysis

Aims:

  • describe how the measurement or test result has been obtained, and
  • compile as complete a list (“budget”) as possible of all potential error sources which might affect the result.

Some measurement errors can be reduced by:

  • repeated measurement (reduction of random errors) – more precisely, the errors themselves are not reduced, but the mean value becomes a better representation of the true value.
  • application of corrections (reduction of systematic errors) through calibration
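The first point can be sketched in Python, using the four pendulum readings quoted earlier in this article as illustrative data: the standard uncertainty associated with the mean of n repeated readings is the sample standard deviation divided by √n.

```python
import math
import statistics

# Illustrative repeated readings of g (m/s^2), taken from the
# pendulum example earlier in the article.
readings = [9.76, 9.6, 9.89, 9.9]

mean = statistics.fmean(readings)       # best estimate of the measurand
s = statistics.stdev(readings)          # sample standard deviation
u_mean = s / math.sqrt(len(readings))   # standard uncertainty of the mean

print(f"mean = {mean:.4f}, u(mean) = {u_mean:.4f}")
```

Averaging more readings shrinks u(mean) by 1/√n, which is why repetition improves the estimate even though each individual error is unchanged.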

Random and systematic measurement errors

For repeated measurements of a particular measurement object under given conditions, the error can normally be divided into three components:

  • A component of measurement error which varies randomly between measurements and is assumed to have a mean value of zero.
  • A component which is constant during the actual measurements – a (locally) systematic error.
  • A component which varies systematically during the actual measurement.

Normally these components are only partially known and contribute to the uncertainty of the measurement value.

The random component has an actual distribution under repeated measurement and, since these errors cannot be predicted and their expectation value is zero, no correction can be made for them. The uncertainty can appropriately be expressed as an interval (about 0) which covers a given proportion of the distribution.

2. Correct for known errors

If the systematic errors are known – both the constant and the systematically varying components – then one can of course correct for them, and they thus do not contribute to the uncertainty. If the systematic error has been estimated in some way, then it can also be corrected for, but residual uncertainties in these estimates must be included in the total uncertainty.

In cases where it is impractical to make repeated estimates of systematic errors, it may still be possible to imagine doing so and thereby assign a plausible distribution of possible correction errors, which may be treated mathematically as above. In such cases, if only a ‘guesstimate’ of the systematic error has been made, no correction is usually applied. However, allowance must still be made for the systematic error’s contribution to the uncertainty: a correction of zero can be said to have been made, but it is an uncertain zero which, as before, has an associated correction error for which an uncertainty should be given.

Uncertainty in a measurement value – “Unknown measurement errors”

Measurement uncertainty is an interval which expresses our lack of knowledge of the real value of the measurement error. For practical use, the measurement uncertainty should be interpreted as follows: with our present knowledge of the measurement error structure, one expects the measurement error to be smaller than the measurement uncertainty with at least an approximate, stated probability.

Measurement uncertainty and knowledge

In most cases there is seldom the time or the resources to investigate all possible sources of measurement error. Since knowledge about a measurement is always limited, the measurement result will have an uncertainty.


‘Unknown’ measurement errors, examples:

  • Unstable travelling standards
  • Barometer in need of repair


