Uncertainties in Measurements and Error Propagation: Laboratory Work

Suppose two measured quantities x and y have uncertainties, Δx and Δy, determined by procedures described in previous sections: we would report (x ± Δx) and (y ± Δy). From the measured quantities a new quantity, z, is calculated from x and y. What is the uncertainty, Δz, in z? For the purposes of this course we will use a simplified version of the proper statistical treatment. The formulas for a full statistical treatment (using standard deviations) will also be given. The guiding principle in all cases is to consider the most pessimistic situation. Full explanations are covered in statistics courses.

2. Determining random errors.

3. What is the range of possible values?

4. Relative and Absolute Errors

5. Propagation of Errors, Basic Rules


The examples included in this section also show the proper rounding of answers, which is covered in more detail in Section 6. The examples propagate errors using average deviations.

(a) Addition and Subtraction: z = x + y or z = x - y
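This case can be sketched in Python. Assuming the usual simplified worst-case form for this course, Δz = Δx + Δy (the full statistical treatment adds the standard deviations in quadrature instead), a minimal illustration is:

```python
from math import sqrt

def add_sub_uncertainty(dx, dy):
    """Worst-case uncertainty of z = x + y or z = x - y:
    the absolute uncertainties simply add."""
    return abs(dx) + abs(dy)

def add_sub_uncertainty_stat(dx, dy):
    """Full statistical version: standard deviations add in quadrature."""
    return sqrt(dx ** 2 + dy ** 2)

dz = add_sub_uncertainty(0.12, 0.16)            # worst-case rule
dz_stat = add_sub_uncertainty_stat(0.12, 0.16)  # quadrature rule
print(dz, dz_stat)
```

The worst-case value is always at least as large as the quadrature value, in keeping with the "most pessimistic" guiding principle stated above.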


6. Rounding off answers in regular and scientific notation.

In the above examples we were careful to round the answers to an
appropriate
number of significant figures. The uncertainty should be rounded off
to one
or two significant figures. If the leading figure in the uncertainty
is a
1, we use two significant figures, otherwise we use one significant
figure.
Then the answer should be rounded to match.

Example: Round off z = 12.0349 cm and Δz = 0.153 cm.

Since Δz begins with a 1, we round off Δz to two significant figures:

Δz = 0.15 cm. Hence, round z to have the same number of decimal places:

z = (12.03 ± 0.15) cm.
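The rounding rule above can be sketched as a small Python helper (`round_result` is a name introduced here for illustration, not part of the handout):

```python
from math import floor, log10

def round_result(z, dz):
    """Round dz to one significant figure (two if its leading digit
    is a 1), then round z to the same decimal place."""
    exponent = floor(log10(abs(dz)))          # decade of the leading digit
    leading = int(abs(dz) / 10 ** exponent)   # leading digit of dz
    sig = 2 if leading == 1 else 1            # the rule from the text
    decimals = sig - 1 - exponent             # decimal places to keep
    return round(z, decimals), round(dz, decimals)

print(round_result(12.0349, 0.153))   # (12.03, 0.15)
```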

When the answer is given in scientific notation, the uncertainty should be given in scientific notation with the same power of ten. Thus, if

z = 1.43 x 10^6 s and Δz = 2 x 10^4 s,

we should write our answer as

z = (1.43 ± 0.02) x 10^6 s.

This notation makes the range of values most easily understood. The following is technically correct, but is hard to understand at a glance:

z = (1.43 x 10^6 ± 2 x 10^4) s. Don't write like this!

Problem: Express the following results in proper rounded form, x ± Δx.

(i) m = 14.34506 grams, Δm = 0.04251 grams.
(ii) t = 0.02346 sec, Δt = 1.623 x 10^-3 sec.
(iii) M = 7.35 x 10^22 kg, ΔM = 2.6 x 10^20 kg.
(iv) m = 9.11 x 10^-31 kg, Δm = 2.2345 x 10^-33 kg.


7. Significant Figures

The rules for propagation of errors hold true whenever we are in the lab, but doing propagation of errors is time consuming. The rules for significant figures allow a much quicker method to get results that are approximately correct even when we have no uncertainty values.

A significant figure is any digit 1 to 9 and any zero which is not a place holder. Thus, in 1.350 there are 4 significant figures since the zero is not needed to make sense of the number. In a number like 0.00320 there are 3 significant figures; the first three zeros are just place holders. However, the number 1350 is ambiguous. You cannot tell if there are 3 significant figures (the 0 is only used to hold the units place) or if there are 4 significant figures and the zero in the units place was actually measured to be zero.
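These counting rules can be sketched in Python. This is a simple illustration working on the printed string; like the text, it treats trailing zeros of a bare integer as ambiguous place holders (i.e. not significant):

```python
def sig_figs(s: str) -> int:
    """Count significant figures in a plain decimal string.
    Trailing zeros of a bare integer are treated as place holders."""
    s = s.lstrip('+-')
    if '.' in s:
        # leading zeros are place holders; everything after them counts
        return len(s.replace('.', '').lstrip('0'))
    # no decimal point: strip leading zeros and ambiguous trailing zeros
    return len(s.lstrip('0').rstrip('0'))

print(sig_figs('1.350'))    # 4
print(sig_figs('0.00320'))  # 3
print(sig_figs('1350'))     # 3 (the ambiguous case from the text)
```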

How do we resolve ambiguities that arise with zeros when we need to use zero as a place holder as well as a significant figure? Suppose we measure a length to three significant figures as 8000 cm. Written this way we cannot tell if there are 1, 2, 3, or 4 significant figures. To make the number of significant figures apparent we use scientific notation, 8 x 10^3 cm (which has one significant figure), or 8.00 x 10^3 cm (which has three significant figures), or whatever is correct under the circumstances.

We start then with numbers each with their own number of significant
figures
and compute a new quantity. How many significant figures should be in
the
final answer? In doing running computations we maintain numbers to
many figures,
but we must report the answer only to the proper number of
significant figures.

In the case of addition and subtraction we can best explain with an
example.
Suppose one object is measured to have a mass of 9.9 gm and a second
object
is measured on a different balance to have a mass of 0.3163 gm. What
is the
total mass? We write the numbers with question marks at places where
we lack
information. Thus 9.9???? gm and 0.3163? gm. Adding them with the
decimal
points lined up we see

  09.9????
+ 00.3163?
  10.2???? = 10.2 gm.

In the case of multiplication or division we can use the same idea
of unknown
digits. Thus the product of 3.413? and 2.3? can be written in long
hand as

3.413?
2.3?
   ?????
 10239?
 6826?
7.8????? = 7.8

The short rule for multiplication and division is that the answer
will contain
a number of significant figures equal to the number of significant
figures
in the entering number having the least number of significant
figures. In
the above example 2.3 had 2 significant figures while 3.413 had 4, so
the
answer is given to 2 significant figures.
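A short Python illustration of this rule (`round_to_sig_figs` is a helper introduced here, not from the text):

```python
from math import floor, log10

def round_to_sig_figs(x, n):
    """Round x to n significant figures."""
    if x == 0:
        return 0.0
    return round(x, n - 1 - floor(log10(abs(x))))

# 2.3 has 2 significant figures and 3.413 has 4, so the product
# is reported to 2 significant figures, as in the example above.
print(round_to_sig_figs(3.413 * 2.3, 2))   # 7.8
```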

It is important to keep these concepts in mind as you use calculators with 8 or 10 digit displays if you are to avoid mistakes in your answers and to avoid the wrath of physics instructors everywhere. A good procedure is to use all digits (significant or not) throughout calculations, and only round off the answers to the appropriate number of significant figures.

Problem: How many significant figures are there in each of the following?

(i) 0.00042   (ii) 0.14700   (iii) 4.2 x 10^n   (iv) -154.090 x 10^-27


8. Problems on Uncertainties and Error
Propagation.

Try the following problems to see if you understand the details of this part. The answers are at the end.

(a) Find the average and the average deviation of the following
measurements
of a mass.

4.32, 4.35, 4.31, 4.36, 4.37, 4.34 grams.

(b) Express the following results in proper rounded form, x ± Δx.

(i) m = 14.34506 grams, Δm = 0.04251 grams.

(ii) t = 0.02346 sec, Δt = 1.623 x 10^-3 sec.

(iii) M = 7.35 x 10^22 kg, ΔM = 2.6 x 10^20 kg.

(iv) m = 9.11 x 10^-31 kg, Δm = 2.2345 x 10^-33 kg.

(c) Are the following numbers equal within the expected range of values?

(i) (3.42 ± 0.04) m/s and 3.48 m/s?
(ii) (13.106 ± 0.014) grams and 13.206 grams?
(iii) (2.95 ± 0.03) x 10^8 m/s and 3.00 x 10^8 m/s?

(d) Calculate z and Δz for each of the following cases.

(i) z = (x - 2.5 y + w) for x = (4.72 ± 0.12) m, y = (4.4 ± 0.2) m, w = (15.63 ± 0.16) m.
(ii) z = (w x / y) for w = (14.42 ± 0.03) m/s^2, x = (3.61 ± 0.18) m, y = (650 ± 20) m/s.
(iii) z = x^3 for x = (3.55 ± 0.15) m.
(iv) z = v (x y + w) with v = (0.644 ± 0.004) m, x = (3.42 ± 0.06) m, y = (5.00 ± 0.12) m, w = (12.13 ± 0.08).
(v) z = A sin y for A = (1.602 ± 0.007) m/s, y = (0.774 ± 0.003) rad.

(e) How many significant figures are there in each of the following?

(i) 0.00042   (ii) 0.14700   (iii) 4.2 x 10^n   (iv) -154.090 x 10^-27

(f) I measure a length with a meter stick which has a least count of 1 mm. I measure the length 5 times, with results in mm of 123, 123, 124, 123, 123. What is the average length and the uncertainty in the length?



Answers for Section 8:

(a) (4.342 ± 0.018) grams

(b) i) (14.34 ± 0.04) grams   ii) (0.0235 ± 0.0016) sec or (2.35 ± 0.16) x 10^-2 sec   iii) (7.35 ± 0.03) x 10^22 kg   iv) (9.11 ± 0.02) x 10^-31 kg

(c) Yes for (i) and (iii), no for (ii)

(d) i) (9.4 ± 0.8) m   ii) (0.080 ± 0.007) m/s   iii) (45 ± 6)   iv) (18.8 ± 0.6)   v) (1.120 ± 0.008) m/s

(e) i) 2   ii) 5   iii) 2   iv) 6

(f) (123 ± 1) mm (I used the ILE = least count since it is larger than the average deviation.)
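The answer to (a) can be reproduced with a few lines of Python (standard library only; shown here as a check, not part of the original handout):

```python
from statistics import mean

masses = [4.32, 4.35, 4.31, 4.36, 4.37, 4.34]    # grams, problem (a)
avg = mean(masses)
avg_dev = mean(abs(m - avg) for m in masses)     # average absolute deviation
print(round(avg, 3), round(avg_dev, 3))          # 4.342 0.018
```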
 

9. Glossary of Important Terms

This particular lab focuses on the fundamentals of measurement: what does it mean to make a good measurement, and how do we quantify the uncertainty on that measurement? Along the way, you will also be introduced to fundamental statistics concepts like mean and standard deviation that you might have seen in a statistics course. You will also be thinking about how to represent uncertainty, exploring several options including:

  • significant figures – which most of you have probably seen.
  • the crank three times method – simple to implement but somewhat limited.
  • Monte Carlo error propagation – the error propagation technique actually used in most modern research, because it is the easiest to implement with complex datasets and complex formulas.

In order to do Monte Carlo effectively, you are also going to learn several different spreadsheet techniques. As stated in the syllabus, we are going to use Google Sheets to teach you spreadsheets because it is the easiest for us to help you learn with. The basic ideas, however, work for Excel, Numbers, or any of the other spreadsheet programs out there. As was said in the introduction to the lab, spreadsheets are a great skill used in many different careers and in many different contexts. So, along the way you're going to learn some nice spreadsheet techniques.

The measurement that you're going to do in this lab might seem relatively simple: all you're going to measure is the volume of a U.S. nickel. You will need to measure the radius and the thickness of your nickel. All the materials you need are ten US nickels and a metric ruler (things that you can probably get pretty easily). Again, these measurements may seem relatively simple. They are. That is on purpose. We want to focus first on an easy-to-make measurement so that we can really think about what the uncertainties are and how these uncertainties, from just the radius and thickness, propagate through a calculation into the volume. We want to do this with a simple measurement that we can fully understand before we move to something more complex. I think, along the way, you will be surprised how hard measuring something as seemingly simple as the volume of a nickel can actually be.
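As a preview of the Monte Carlo technique mentioned above, here is a sketch in Python rather than a spreadsheet. The measured values and uncertainties below are illustrative placeholders, not real data (a real nickel is roughly 21.2 mm across and 1.95 mm thick); the logic of drawing random inputs and looking at the spread of the outputs is the point:

```python
import math
import random
import statistics

# Hypothetical measurements (illustrative numbers, not real data):
r, dr = 10.6, 0.1     # radius and its uncertainty, mm
t, dt = 1.95, 0.05    # thickness and its uncertainty, mm

random.seed(0)                     # reproducible demo
samples = []
for _ in range(100_000):
    ri = random.gauss(r, dr)       # radius drawn consistent with r ± dr
    ti = random.gauss(t, dt)       # thickness drawn consistent with t ± dt
    samples.append(math.pi * ri ** 2 * ti)   # V = pi * r^2 * t

V = statistics.mean(samples)       # best estimate of the volume
dV = statistics.stdev(samples)     # its propagated uncertainty
print(f"V = {V:.0f} ± {dV:.0f} mm^3")
```

Changing dr and dt and re-running shows immediately which measurement dominates the final uncertainty.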

As usual, the lab will guide you through with a series of questions and instructional materials. Do not hesitate to reach out to your lab TA for help if you need it. For any multiple-choice question, you will get multiple attempts, and there will be a way to check your answers. There is a small deduction in credit for each attempt (let's be honest, we don't want you just guessing!).

Have fun really thinking about how to make measurements and what uncertainty really means!

From Wikipedia, the free encyclopedia

In statistics, propagation of uncertainty (or propagation of error) is the effect of variables’ uncertainties (or errors, more specifically random errors) on the uncertainty of a function based on them. When the variables are the values of experimental measurements they have uncertainties due to measurement limitations (e.g., instrument precision) which propagate due to the combination of variables in the function.

The uncertainty u can be expressed in a number of ways. It may be defined by the absolute error Δx. Uncertainties can also be defined by the relative error Δx/x, which is usually written as a percentage.
Most commonly, the uncertainty on a quantity is quantified in terms of the standard deviation, σ, which is the positive square root of the variance. The value of a quantity and its error are then expressed as an interval x ± u.
However, the most general way of characterizing uncertainty is by specifying its probability distribution.
If the probability distribution of the variable is known or can be assumed, in theory it is possible to get any of its statistics. In particular, it is possible to derive confidence limits to describe the region within which the true value of the variable may be found. For example, the 68% confidence limits for a one-dimensional variable belonging to a normal distribution are approximately ± one standard deviation σ from the central value x, which means that the region x ± σ will cover the true value in roughly 68% of cases.

If the uncertainties are correlated then covariance must be taken into account. Correlation can arise from two different sources. First, the measurement errors may be correlated. Second, when the underlying values are correlated across a population, the uncertainties in the group averages will be correlated.[1]

In a general context where a nonlinear function modifies the uncertain parameters (correlated or not), the standard tools to propagate uncertainty, and infer resulting quantity probability distribution/statistics, are sampling techniques from the Monte Carlo method family.[2] For very expensive data or complex functions, the calculation of the error propagation may be very expensive so that a surrogate model[3] or a parallel computing strategy[4][5][6] may be necessary.

In some particular cases, the uncertainty propagation calculation can be done through simplistic algebraic procedures. Some of these scenarios are described below.

Linear combinations

Let f_k(x_1, x_2, \dots, x_n), k = 1, \dots, m, be a set of m functions, which are linear combinations of n variables x_1, x_2, \dots, x_n with combination coefficients A_{k1}, A_{k2}, \dots, A_{kn}:

f_k = \sum_{i=1}^{n} A_{ki} x_i,

or in matrix notation,

\mathbf{f} = \mathbf{A}\mathbf{x}.

Also let the variance–covariance matrix of x = (x_1, \dots, x_n) be denoted by \boldsymbol{\Sigma}^x and let the mean value be denoted by \boldsymbol{\mu}:

\boldsymbol{\Sigma}^x = E[(\mathbf{x} - \boldsymbol{\mu}) \otimes (\mathbf{x} - \boldsymbol{\mu})] =
\begin{pmatrix}
\sigma_1^2 & \sigma_{12} & \sigma_{13} & \cdots \\
\sigma_{21} & \sigma_2^2 & \sigma_{23} & \cdots \\
\sigma_{31} & \sigma_{32} & \sigma_3^2 & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix}
=
\begin{pmatrix}
\Sigma_{11}^x & \Sigma_{12}^x & \Sigma_{13}^x & \cdots \\
\Sigma_{21}^x & \Sigma_{22}^x & \Sigma_{23}^x & \cdots \\
\Sigma_{31}^x & \Sigma_{32}^x & \Sigma_{33}^x & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{pmatrix},

where \otimes is the outer product.

Then, the variance–covariance matrix \boldsymbol{\Sigma}^f of f is given by

\boldsymbol{\Sigma}^f = E[(\mathbf{f} - E[\mathbf{f}]) \otimes (\mathbf{f} - E[\mathbf{f}])] = E[\mathbf{A}(\mathbf{x} - \boldsymbol{\mu}) \otimes \mathbf{A}(\mathbf{x} - \boldsymbol{\mu})] = \mathbf{A}\, E[(\mathbf{x} - \boldsymbol{\mu}) \otimes (\mathbf{x} - \boldsymbol{\mu})]\, \mathbf{A}^{\mathrm{T}} = \mathbf{A} \boldsymbol{\Sigma}^x \mathbf{A}^{\mathrm{T}}.

In component notation, the equation

\boldsymbol{\Sigma}^f = \mathbf{A} \boldsymbol{\Sigma}^x \mathbf{A}^{\mathrm{T}}

reads

\Sigma_{ij}^f = \sum_{k=1}^{n} \sum_{l=1}^{n} A_{ik} \Sigma_{kl}^x A_{jl}.

This is the most general expression for the propagation of error from one set of variables onto another. When the errors on x are uncorrelated, the general expression simplifies to

\Sigma_{ij}^f = \sum_{k=1}^{n} A_{ik} \Sigma_k^x A_{jk},

where \Sigma_k^x = \sigma_{x_k}^2 is the variance of the k-th element of the x vector.
Note that even though the errors on x may be uncorrelated, the errors on f are in general correlated; in other words, even if \boldsymbol{\Sigma}^x is a diagonal matrix, \boldsymbol{\Sigma}^f is in general a full matrix.
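A small numerical sketch of the linear propagation rule Σ^f = A Σ^x A^T in plain Python (the matrices below are toy numbers chosen for illustration):

```python
import math

def propagate_linear(A, Sigma):
    """Return A @ Sigma @ A^T for small matrices given as lists of lists."""
    n = len(Sigma)
    m = len(A)
    AS = [[sum(A[i][k] * Sigma[k][j] for k in range(n)) for j in range(n)]
          for i in range(m)]
    return [[sum(AS[i][k] * A[j][k] for k in range(n)) for j in range(m)]
            for i in range(m)]

A = [[1.0, 1.0]]                        # single function f = x1 + x2
Sigma_x = [[0.04, 0.0], [0.0, 0.09]]    # uncorrelated: sigma1 = 0.2, sigma2 = 0.3
Sigma_f = propagate_linear(A, Sigma_x)
print(math.sqrt(Sigma_f[0][0]))         # sqrt(0.04 + 0.09), about 0.36
```

For a sum of uncorrelated variables this reproduces the familiar quadrature rule; adding off-diagonal terms to Sigma_x shows how correlation changes the result.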

The general expressions for a scalar-valued function f are a little simpler (here a is a row vector):

f = \sum_{i=1}^{n} a_i x_i = \mathbf{a}\mathbf{x},

\sigma_f^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} a_i \Sigma_{ij}^x a_j = \mathbf{a} \boldsymbol{\Sigma}^x \mathbf{a}^{\mathrm{T}}.

Each covariance term \sigma_{ij} can be expressed in terms of the correlation coefficient \rho_{ij} by \sigma_{ij} = \rho_{ij} \sigma_i \sigma_j, so that an alternative expression for the variance of f is

\sigma_f^2 = \sum_{i=1}^{n} a_i^2 \sigma_i^2 + \sum_{i=1}^{n} \sum_{j \ne i} a_i a_j \rho_{ij} \sigma_i \sigma_j.

In the case that the variables in x are uncorrelated, this simplifies further to

\sigma_f^2 = \sum_{i=1}^{n} a_i^2 \sigma_i^2.

In the simple case of identical coefficients and variances, we find

\sigma_f = \sqrt{n}\, |a| \sigma.

For the arithmetic mean, a = 1/n, the result is the standard error of the mean:

\sigma_f = \sigma / \sqrt{n}.

Non-linear combinations

When f is a set of non-linear combinations of the variables x, an interval propagation could be performed in order to compute intervals which contain all consistent values for the variables. In a probabilistic approach, the function f must usually be linearised by approximation to a first-order Taylor series expansion, though in some cases exact formulae can be derived that do not depend on the expansion, as is the case for the exact variance of products.[7] The Taylor expansion would be:

f_k \approx f_k^0 + \sum_{i=1}^{n} \frac{\partial f_k}{\partial x_i} x_i,

where \partial f_k / \partial x_i denotes the partial derivative of f_k with respect to the i-th variable, evaluated at the mean value of all components of vector x. Or in matrix notation,

\mathbf{f} \approx \mathbf{f}^0 + \mathbf{J}\mathbf{x},

where J is the Jacobian matrix. Since f^0 is a constant, it does not contribute to the error on f. Therefore, the propagation of error follows the linear case, above, but replacing the linear coefficients A_{ki} and A_{kj} by the partial derivatives \partial f_k / \partial x_i and \partial f_k / \partial x_j. In matrix notation,[8]

\boldsymbol{\Sigma}^f = \mathbf{J} \boldsymbol{\Sigma}^x \mathbf{J}^{\top}.

That is, the Jacobian of the function is used to transform the rows and columns of the variance–covariance matrix of the argument.
Note this is equivalent to the matrix expression for the linear case with \mathbf{J} = \mathbf{A}.
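A numerical sketch of this first-order (Jacobian) propagation for a scalar function, using central finite differences for the partial derivatives. This is illustrative code assuming uncorrelated inputs (diagonal Σ^x):

```python
import math

def propagate_first_order(f, x, sigmas, h=1e-6):
    """sigma_f ≈ sqrt(sum_i (df/dx_i * sigma_i)^2) for uncorrelated inputs."""
    grads = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        grads.append((f(xp) - f(xm)) / (2 * h))   # central difference
    return math.sqrt(sum((g * s) ** 2 for g, s in zip(grads, sigmas)))

# f(x, y) = x * y with x = 3.0 ± 0.1 and y = 4.0 ± 0.2:
sigma_f = propagate_first_order(lambda v: v[0] * v[1], [3.0, 4.0], [0.1, 0.2])
print(sigma_f)   # analytic value: sqrt((4*0.1)^2 + (3*0.2)^2), about 0.72
```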

Simplification

Neglecting correlations or assuming independent variables yields a common formula among engineers and experimental scientists to calculate error propagation, the variance formula:[9]

s_f = \sqrt{ \left( \frac{\partial f}{\partial x} \right)^2 s_x^2 + \left( \frac{\partial f}{\partial y} \right)^2 s_y^2 + \left( \frac{\partial f}{\partial z} \right)^2 s_z^2 + \cdots },

where s_f represents the standard deviation of the function f, s_x represents the standard deviation of x, s_y represents the standard deviation of y, and so forth.

It is important to note that this formula is based on the linear characteristics of the gradient of f, and therefore it is a good estimation for the standard deviation of f as long as s_x, s_y, s_z, \dots are small enough. Specifically, the linear approximation of f has to be close to f inside a neighbourhood of radius s_x, s_y, s_z, \dots.[10]

Example

Any non-linear differentiable function f(a, b) of two variables a and b can be expanded as

f \approx f^0 + \frac{\partial f}{\partial a} a + \frac{\partial f}{\partial b} b.

Now, taking variance on both sides and using the formula[11] for the variance of a linear combination of variables,

\mathrm{Var}(aX + bY) = a^2\, \mathrm{Var}(X) + b^2\, \mathrm{Var}(Y) + 2ab\, \mathrm{Cov}(X, Y),

hence:

\sigma_f^2 \approx \left| \frac{\partial f}{\partial a} \right|^2 \sigma_a^2 + \left| \frac{\partial f}{\partial b} \right|^2 \sigma_b^2 + 2 \frac{\partial f}{\partial a} \frac{\partial f}{\partial b} \sigma_{ab},

where \sigma_f is the standard deviation of the function f, \sigma_a is the standard deviation of a, \sigma_b is the standard deviation of b, and \sigma_{ab} = \sigma_a \sigma_b \rho_{ab} is the covariance between a and b.

In the particular case that f = ab, we have \partial f / \partial a = b and \partial f / \partial b = a. Then

\sigma_f^2 \approx b^2 \sigma_a^2 + a^2 \sigma_b^2 + 2ab\, \sigma_{ab},

or

\left( \frac{\sigma_f}{f} \right)^2 \approx \left( \frac{\sigma_a}{a} \right)^2 + \left( \frac{\sigma_b}{b} \right)^2 + 2 \left( \frac{\sigma_a}{a} \right) \left( \frac{\sigma_b}{b} \right) \rho_{ab},

where \rho_{ab} is the correlation between a and b.

When the variables a and b are uncorrelated, \rho_{ab} = 0. Then

\left( \frac{\sigma_f}{f} \right)^2 \approx \left( \frac{\sigma_a}{a} \right)^2 + \left( \frac{\sigma_b}{b} \right)^2.
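For instance, the uncorrelated product rule can be evaluated numerically (the values below are made-up illustrative numbers):

```python
import math

a, sigma_a = 2.0, 0.02    # 1% relative uncertainty
b, sigma_b = 5.0, 0.10    # 2% relative uncertainty
f = a * b
# relative uncertainties add in quadrature for an uncorrelated product
rel_f = math.sqrt((sigma_a / a) ** 2 + (sigma_b / b) ** 2)
sigma_f = f * rel_f
print(f, sigma_f)   # relative uncertainty rel_f is about 2.24%
```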

Caveats and warnings

Error estimates for non-linear functions are biased on account of using a truncated series expansion. The extent of this bias depends on the nature of the function. For example, the bias on the error calculated for log(1+x) increases as x increases, since the expansion to x is a good approximation only when x is near zero.

For highly non-linear functions, there exist five categories of probabilistic approaches for uncertainty propagation;[12] see Uncertainty quantification for details.

Reciprocal and shifted reciprocal

In the special case of the inverse or reciprocal 1/B, where B = N(0, 1) follows a standard normal distribution, the resulting distribution is a reciprocal standard normal distribution, and there is no definable variance.[13]

However, in the slightly more general case of a shifted reciprocal function 1/(p - B) for B = N(\mu, \sigma) following a general normal distribution, mean and variance statistics do exist in a principal value sense, if the difference between the pole p and the mean \mu is real-valued.[14]

Ratios

Ratios are also problematic; normal approximations exist under certain conditions.

Example formulae

This table shows the variances and standard deviations of simple functions of the real variables A and B, with standard deviations \sigma_A, \sigma_B, covariance \sigma_{AB} = \rho_{AB} \sigma_A \sigma_B, and correlation \rho_{AB}.
The real-valued coefficients a and b are assumed exactly known (deterministic), i.e., \sigma_a = \sigma_b = 0.

In the "variance" expressions below, A and B should be understood as expectation values (i.e. values around which we're estimating the uncertainty), and f should be understood as the value of the function calculated at the expectation values of A and B.

Function and variance (the standard deviation \sigma_f is the positive square root of each expression):

f = aA:                   \sigma_f^2 = a^2 \sigma_A^2
f = aA + bB:              \sigma_f^2 = a^2 \sigma_A^2 + b^2 \sigma_B^2 + 2ab\, \sigma_{AB}
f = aA - bB:              \sigma_f^2 = a^2 \sigma_A^2 + b^2 \sigma_B^2 - 2ab\, \sigma_{AB}
f = A - B:                \sigma_f^2 = \sigma_A^2 + \sigma_B^2 - 2 \sigma_{AB}
f = AB:                   \sigma_f^2 \approx f^2 [(\sigma_A/A)^2 + (\sigma_B/B)^2 + 2 \sigma_{AB}/(AB)] [15][16]
f = A/B:                  \sigma_f^2 \approx f^2 [(\sigma_A/A)^2 + (\sigma_B/B)^2 - 2 \sigma_{AB}/(AB)] [17]
f = aA^b:                 \sigma_f^2 \approx (a b A^{b-1} \sigma_A)^2 = (f b \sigma_A / A)^2
f = a \ln(bA):            \sigma_f^2 \approx (a \sigma_A / A)^2 [18]
f = a \log_{10}(bA):      \sigma_f^2 \approx (a \sigma_A / (A \ln 10))^2 [18]
f = a e^{bA}:             \sigma_f^2 \approx f^2 (b \sigma_A)^2 [19]
f = a^{bA}:               \sigma_f^2 \approx f^2 (b \ln(a) \sigma_A)^2
f = a \sin(bA):           \sigma_f^2 \approx [a b \cos(bA) \sigma_A]^2
f = a \cos(bA):           \sigma_f^2 \approx [a b \sin(bA) \sigma_A]^2
f = a \tan(bA):           \sigma_f^2 \approx [a b \sec^2(bA) \sigma_A]^2
f = A^B:                  \sigma_f^2 \approx f^2 [(B \sigma_A / A)^2 + (\ln(A) \sigma_B)^2 + 2 (B \ln(A)/A) \sigma_{AB}]
f = \sqrt{aA^2 \pm bB^2}: \sigma_f^2 \approx (A/f)^2 a^2 \sigma_A^2 + (B/f)^2 b^2 \sigma_B^2 \pm 2ab (AB/f^2)\, \sigma_{AB}

For uncorrelated variables (\rho_{AB} = 0, \sigma_{AB} = 0) expressions for more complicated functions can be derived by combining simpler functions. For example, repeated multiplication, assuming no correlation, gives

f = ABC: \qquad \left( \frac{\sigma_f}{f} \right)^2 \approx \left( \frac{\sigma_A}{A} \right)^2 + \left( \frac{\sigma_B}{B} \right)^2 + \left( \frac{\sigma_C}{C} \right)^2.

For the case f = AB we also have Goodman's expression[7] for the exact variance: for the uncorrelated case it is

V(XY) = E(X)^2 V(Y) + E(Y)^2 V(X) + E\big( (X - E(X))^2 (Y - E(Y))^2 \big),

and therefore we have:

\sigma_f^2 = A^2 \sigma_B^2 + B^2 \sigma_A^2 + \sigma_A^2 \sigma_B^2.

Effect of correlation on differences

If A and B are uncorrelated, their difference A - B will have more variance than either of them. An increasing positive correlation (\rho_{AB} \to 1) will decrease the variance of the difference, converging to zero variance for perfectly correlated variables with the same variance. On the other hand, a negative correlation (\rho_{AB} \to -1) will further increase the variance of the difference, compared to the uncorrelated case.

For example, the self-subtraction f = A - A has zero variance \sigma_f^2 = 0 only if the variate is perfectly autocorrelated (\rho_A = 1). If A is uncorrelated, \rho_A = 0, then the output variance is twice the input variance, \sigma_f^2 = 2 \sigma_A^2. And if A is perfectly anticorrelated, \rho_A = -1, then the input variance is quadrupled in the output, \sigma_f^2 = 4 \sigma_A^2 (notice 1 - \rho_A = 2 for f = aA - aA in the table above).

Example calculations

Inverse tangent function

We can calculate the uncertainty propagation for the inverse tangent function as an example of using partial derivatives to propagate error.

Define

f(x) = \arctan(x),

where \Delta_x is the absolute uncertainty on our measurement of x. The derivative of f(x) with respect to x is

\frac{df}{dx} = \frac{1}{1 + x^2}.

Therefore, our propagated uncertainty is

\Delta_f \approx \frac{\Delta_x}{1 + x^2},

where \Delta_f is the absolute propagated uncertainty.
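This formula is easy to sanity-check numerically (the values of x and Δx below are illustrative):

```python
import math

x, dx = 1.0, 0.01
df = dx / (1 + x ** 2)     # first-order propagated uncertainty

# Compare with direct evaluation of arctan over the interval x ± dx:
half_spread = (math.atan(x + dx) - math.atan(x - dx)) / 2
print(df, half_spread)     # the two agree to first order in dx
```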

Resistance measurement

A practical application is an experiment in which one measures current, I, and voltage, V, on a resistor in order to determine the resistance, R, using Ohm’s law, R = V / I.

Given the measured variables with uncertainties, I ± σI and V ± σV, and neglecting their possible correlation, the uncertainty in the computed quantity, σR, is:

\sigma_R \approx \sqrt{ \sigma_V^2 \left( \frac{1}{I} \right)^2 + \sigma_I^2 \left( \frac{-V}{I^2} \right)^2 } = R \sqrt{ \left( \frac{\sigma_V}{V} \right)^2 + \left( \frac{\sigma_I}{I} \right)^2 }.
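In code, with made-up measured values (V = 12.0 V and I = 2.0 A are illustrative, not from the text):

```python
import math

V, sigma_V = 12.0, 0.1    # volts
I, sigma_I = 2.0, 0.05    # amperes
R = V / I
# neglecting any correlation between the V and I measurements
sigma_R = R * math.sqrt((sigma_V / V) ** 2 + (sigma_I / I) ** 2)
print(R, sigma_R)   # R = 6.0 ohm with an uncertainty of about 0.16 ohm
```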

See also

  • Accuracy and precision
  • Automatic differentiation
  • Bienaymé’s identity
  • Delta method
  • Dilution of precision (navigation)
  • Errors and residuals in statistics
  • Experimental uncertainty analysis
  • Interval finite element
  • Measurement uncertainty
  • Numerical stability
  • Probability bounds analysis
  • Significance arithmetic
  • Uncertainty quantification
  • Random-fuzzy variable
  • Variance#Propagation

References

  1. ^ Kirchner, James. «Data Analysis Toolkit #5: Uncertainty Analysis and Error Propagation» (PDF). Berkeley Seismology Laboratory. University of California. Retrieved 22 April 2016.
  2. ^ Kroese, D. P.; Taimre, T.; Botev, Z. I. (2011). Handbook of Monte Carlo Methods. John Wiley & Sons.
  3. ^ Ranftl, Sascha; von der Linden, Wolfgang (2021-11-13). «Bayesian Surrogate Analysis and Uncertainty Propagation». Physical Sciences Forum. 3 (1): 6. doi:10.3390/psf2021003006. ISSN 2673-9984.
  4. ^ Atanassova, E.; Gurov, T.; Karaivanova, A.; Ivanovska, S.; Durchova, M.; Dimitrov, D. (2016). «On the parallelization approaches for Intel MIC architecture». AIP Conference Proceedings. 1773 (1): 070001. Bibcode:2016AIPC.1773g0001A. doi:10.1063/1.4964983.
  5. ^ Cunha Jr, A.; Nasser, R.; Sampaio, R.; Lopes, H.; Breitman, K. (2014). «Uncertainty quantification through the Monte Carlo method in a cloud computing setting». Computer Physics Communications. 185 (5): 1355–1363. arXiv:2105.09512. Bibcode:2014CoPhC.185.1355C. doi:10.1016/j.cpc.2014.01.006. S2CID 32376269.
  6. ^ Lin, Y.; Wang, F.; Liu, B. (2018). «Random number generators for large-scale parallel Monte Carlo simulations on FPGA». Journal of Computational Physics. 360: 93–103. Bibcode:2018JCoPh.360…93L. doi:10.1016/j.jcp.2018.01.029.
  7. ^ a b Goodman, Leo (1960). «On the Exact Variance of Products». Journal of the American Statistical Association. 55 (292): 708–713. doi:10.2307/2281592. JSTOR 2281592.
  8. ^ Ochoa, Benjamin; Belongie, Serge. "Covariance Propagation for Guided Matching". Archived 2011-07-20 at the Wayback Machine.
  9. ^ Ku, H. H. (October 1966). «Notes on the use of propagation of error formulas». Journal of Research of the National Bureau of Standards. 70C (4): 262. doi:10.6028/jres.070c.025. ISSN 0022-4316. Retrieved 3 October 2012.
  10. ^ Clifford, A. A. (1973). Multivariate error analysis: a handbook of error propagation and calculation in many-parameter systems. John Wiley & Sons. ISBN 978-0470160558.[page needed]
  11. ^ Soch, Joram (2020-07-07). «Variance of the linear combination of two random variables». The Book of Statistical Proofs. Retrieved 2022-01-29.
  12. ^ Lee, S. H.; Chen, W. (2009). «A comparative study of uncertainty propagation methods for black-box-type problems». Structural and Multidisciplinary Optimization. 37 (3): 239–253. doi:10.1007/s00158-008-0234-7. S2CID 119988015.
  13. ^ Johnson, Norman L.; Kotz, Samuel; Balakrishnan, Narayanaswamy (1994). Continuous Univariate Distributions, Volume 1. Wiley. p. 171. ISBN 0-471-58495-9.
  14. ^ Lecomte, Christophe (May 2013). «Exact statistics of systems with uncertainties: an analytical theory of rank-one stochastic dynamic systems». Journal of Sound and Vibration. 332 (11): 2750–2776. doi:10.1016/j.jsv.2012.12.009.
  15. ^ «A Summary of Error Propagation» (PDF). p. 2. Archived from the original (PDF) on 2016-12-13. Retrieved 2016-04-04.
  16. ^ «Propagation of Uncertainty through Mathematical Operations» (PDF). p. 5. Retrieved 2016-04-04.
  17. ^ «Strategies for Variance Estimation» (PDF). p. 37. Retrieved 2013-01-18.
  18. ^ a b Harris, Daniel C. (2003), Quantitative chemical analysis (6th ed.), Macmillan, p. 56, ISBN 978-0-7167-4464-1
  19. ^ «Error Propagation tutorial» (PDF). Foothill College. October 9, 2009. Retrieved 2012-03-01.

Further reading[edit]

  • Bevington, Philip R.; Robinson, D. Keith (2002), Data Reduction and Error Analysis for the Physical Sciences (3rd ed.), McGraw-Hill, ISBN 978-0-07-119926-1
  • Fornasini, Paolo (2008), The uncertainty in physical measurements: an introduction to data analysis in the physics laboratory, Springer, p. 161, ISBN 978-0-387-78649-0
  • Meyer, Stuart L. (1975), Data Analysis for Scientists and Engineers, Wiley, ISBN 978-0-471-59995-1
  • Peralta, M. (2012), Propagation Of Errors: How To Mathematically Predict Measurement Errors, CreateSpace
  • Rouaud, M. (2013), Probability, Statistics and Estimation: Propagation of Uncertainties in Experimental Measurement (PDF) (short ed.)
  • Taylor, J. R. (1997), An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements (2nd ed.), University Science Books
  • Wang, C M; Iyer, Hari K (2005-09-07). «On higher-order corrections for propagating uncertainties». Metrologia. 42 (5): 406–410. doi:10.1088/0026-1394/42/5/011. ISSN 0026-1394.

External links[edit]

  • A detailed discussion of measurements and the propagation of uncertainty explaining the benefits of using error propagation formulas and Monte Carlo simulations instead of simple significance arithmetic
  • GUM, Guide to the Expression of Uncertainty in Measurement
  • EPFL An Introduction to Error Propagation, Derivation, Meaning and Examples of Cy = Fx Cx Fx’
  • uncertainties package, a program/library for transparently performing calculations with uncertainties (and error correlations).
  • soerp package, a Python program/library for transparently performing *second-order* calculations with uncertainties (and error correlations).
  • Joint Committee for Guides in Metrology (2011). JCGM 102: Evaluation of Measurement Data — Supplement 2 to the «Guide to the Expression of Uncertainty in Measurement» — Extension to Any Number of Output Quantities (PDF) (Technical report). JCGM. Retrieved 13 February 2013.
  • Uncertainty Calculator Propagate uncertainty for any expression

When we measure a property such as length, weight, or time, errors can creep into our results. An error is the difference between the real value and the one we measured, and it is the outcome of something going wrong in the measuring process.

The reasons behind errors can be the instruments used, the people reading the values, or the system used to measure them.

If, for instance, a thermometer with an incorrect scale registers one additional degree every time we use it to measure the temperature, we will always get a measurement that is out by that one degree.

Because of the difference between the real value and the measured one, a degree of uncertainty pertains to our measurements. Thus, when we measure an object whose actual value we don't know with an instrument that produces errors, the actual value lies within an uncertainty range.

The difference between uncertainty and error

The main difference between errors and uncertainties is that an error is the difference between the actual value and the measured value, while an uncertainty is an estimate of the range within which the actual value is expected to lie, representing the reliability of the measurement. In this case, the absolute uncertainty is derived from the range between the largest and smallest measured values.

A simple example is the measurement of a physical constant. Let's say we measure the resistance of a material. The measured values will never be exactly the same because the resistance measurements vary. We know there is an accepted value of 3.4 ohms, and by measuring the resistance twice, we obtain the results 3.35 and 3.41 ohms.

Errors produced the values of 3.35 and 3.41, while the range between 3.35 to 3.41 is the uncertainty range.

Let's take another example: measuring the gravitational acceleration in a laboratory.

The standard gravitational acceleration is 9.81 m/s^2. In the laboratory, conducting some experiments with a pendulum, we obtain four values for g: 9.76 m/s^2, 9.6 m/s^2, 9.89 m/s^2, and 9.9 m/s^2. The variation in values is the product of errors. The mean value is 9.79 m/s^2.

The uncertainty range for the measurements extends from 9.6 m/s^2 to 9.9 m/s^2, while the absolute uncertainty is approximately equal to half of the range, which is equal to the difference between the maximum and minimum values divided by two.

The absolute uncertainty is reported as:

absolute uncertainty = (maximum value − minimum value) / 2

In this case, it will be:

(9.9 m/s^2 − 9.6 m/s^2) / 2 = 0.15 m/s^2, so the result is reported as g = 9.79 ± 0.15 m/s^2.
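The half-range calculation can be sketched in a few lines of Python (a minimal illustration using the pendulum measurements above; variable names are ours):

```python
# Pendulum measurements of g (m/s^2) from the example above
values = [9.76, 9.6, 9.89, 9.9]

mean = sum(values) / len(values)              # 9.7875 m/s^2
half_range = (max(values) - min(values)) / 2  # (9.9 - 9.6) / 2 = 0.15 m/s^2

print(f"g = {mean:.2f} +/- {half_range:.2f} m/s^2")
```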

What is the standard error in the mean?

The standard error in the mean tells us how much the measured mean is likely to differ from the true value. To calculate it, we need to take the following steps:

  1. Calculate the mean of all measurements.
  2. Subtract the mean from each measured value and square the results.
  3. Add up the squared deviations, divide by the number of measurements minus one, and take the square root to obtain the standard deviation.
  4. Divide the standard deviation by the square root of the total number of measurements taken.

Let's look at an example.

You have measured the weight of an object four times. The object is known to weigh exactly 3.0 kg with a precision of below one gram. Your four measurements give you 3.001 kg, 2.997 kg, 3.003 kg, and 3.002 kg. Obtain the error in the mean value.

First, we calculate the mean:

(3.001 kg + 2.997 kg + 3.003 kg + 3.002 kg) / 4 = 3.00075 kg ≈ 3.001 kg

Now we subtract the mean from each value and square the results. The squared deviations are all very small, of the order of 10^-5 kg^2 or less. Adding them up, dividing by the number of measurements minus one, and taking the square root gives a standard deviation of about 0.0026 kg.

Dividing this by the square root of the number of samples, √4 = 2, we get:

σx ≈ 0.0026 kg / 2 ≈ 0.0013 kg

In this case, the standard error of the mean (σx) is around a gram, which is negligible at the precision we are working to.
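The procedure above can be checked with a short Python sketch (illustrative; it uses the sample standard deviation with N − 1 in the denominator):

```python
import math

# Four weight measurements (kg) from the example above
weights = [3.001, 2.997, 3.003, 3.002]

n = len(weights)
mean = sum(weights) / n                               # 3.00075 kg
squared_devs = sum((w - mean) ** 2 for w in weights)  # sum of squared deviations
std_dev = math.sqrt(squared_devs / (n - 1))           # sample standard deviation
sem = std_dev / math.sqrt(n)                          # standard error of the mean, ~0.0013 kg

print(f"mean = {mean:.4f} kg, standard error = {sem:.4f} kg")
```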

What are calibration and tolerance?

Tolerance is the range between the maximum and minimum allowed values for a measurement. Calibration is the process of tuning a measuring instrument so that all measurements fall within the tolerance range.

To calibrate an instrument, its results are compared against other instruments with higher precision and accuracy or against an object whose value has very high precision.

One example is the calibration of a scale.

To calibrate a scale, you must weigh an object whose value is known with high precision. Let's say you use a mass of one kilogram with a possible error of 1 gram. The tolerance is the range 0.998 kg to 1.002 kg. The scale consistently gives a measurement of 1.01 kg. The measured weight is 10 grams above the known value and 8 grams above the upper limit of the tolerance range. The scale does not pass the calibration test if you want to measure weights with high precision.

How is uncertainty reported?

When reporting measurements, the uncertainty needs to be stated. It helps those reading the results to know the potential variation. To do this, the uncertainty range is added after the symbol ±.

Let's say we measure a resistance of 4.5 ohms with an uncertainty of 0.1 ohms. The reported value with its uncertainty is 4.5 ± 0.1 ohms.

We find uncertainty values in many processes, from fabrication to design and architecture to mechanics and medicine.

What are absolute and relative errors?

Errors in measurements are either absolute or relative. Absolute errors describe the difference from the expected value. Relative errors express the absolute error as a fraction of the true value.

Absolute error

Absolute error is the difference between the expected value and the measured one. If we take several measurements of a value, we will obtain several errors. A simple example is measuring the velocity of an object.

Let's say we know that a ball moving across the floor has a velocity of 1.4 m/s. We measure the velocity by timing how long the ball takes to move from one point to another with a stopwatch, which gives us a result of 1.42 m/s.

The absolute error of the measurement is 1.42 m/s − 1.4 m/s = 0.02 m/s.

Relative error

Relative error compares the size of the error to the magnitude of the measurement. It shows us that while the difference between the values might look significant on its own, it can be small compared to the magnitude of the values. Let's revisit the absolute error example and compare it to the relative error.

You use a stopwatch to measure a ball moving across the floor with a velocity of 1.4 m/s. You time how long it takes for the ball to cover a certain distance and divide the length by the time, obtaining a value of 1.42 m/s.

The relative error is 0.02 m/s divided by 1.4 m/s, which is about 0.014. As you can see, the relative error is smaller than the absolute error because the difference is small compared to the magnitude of the velocity.

Another example of the difference in scale is an error in a satellite image. If the image error has a value of 10 metres, this is large on a human scale. However, if the image measures 10 kilometres by 10 kilometres, an error of 10 metres is small.

The relative error can also be reported as a percentage after multiplying by 100 and adding the percentage symbol %.
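The absolute, relative, and percentage errors from the stopwatch example can be computed with a minimal Python sketch (variable names are illustrative):

```python
# Known velocity and the stopwatch measurement from the example above
true_value = 1.4  # m/s
measured = 1.42   # m/s

absolute_error = abs(measured - true_value)   # 0.02 m/s
relative_error = absolute_error / true_value  # ~0.014 (dimensionless)
percentage_error = relative_error * 100       # ~1.4 %
```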

Plotting uncertainties and errors

Uncertainties are plotted as bars in graphs and charts. The bars extend from the measured value to the maximum and minimum possible value. The range between the maximum and the minimum value is the uncertainty range. See the following example of uncertainty bars:

Figure 1. Plot showing the mean value points of each measurement. The bars extending from each point indicate how much the data can vary. Source: Manuel R. Camacho, StudySmarter.

See the following example using several measurements:

You carry out four measurements of the velocity of a ball moving 10 metres, whose speed is decreasing as it advances. You mark 1-metre divisions and use a stopwatch to measure the time it takes for the ball to move between them.

You know that your reaction time on the stopwatch introduces an uncertainty of around 0.2 m/s. Measuring the time with the stopwatch and dividing the distance by it, you obtain values equal to 1.4 m/s, 1.22 m/s, 1.15 m/s, and 1.01 m/s.

Because your reaction on the stopwatch is delayed, producing an uncertainty of 0.2 m/s, your results are reported as 1.4 ± 0.2 m/s, 1.22 ± 0.2 m/s, 1.15 ± 0.2 m/s, and 1.01 ± 0.2 m/s.

The plot of the results can be reported as follows:

Figure 2. The plot shows an approximate representation. The dots represent the actual values of 1.4 m/s, 1.22 m/s, 1.15 m/s, and 1.01 m/s. The bars represent the uncertainty of ±0.2 m/s. Source: Manuel R. Camacho, StudySmarter.

How are uncertainties and errors propagated?

Each measurement has errors and uncertainties. When we carry out operations with values taken from measurements, we add these uncertainties to every calculation. The processes by which uncertainties and errors change our calculations are called uncertainty propagation and error propagation, and they produce a deviation from the actual data or data deviation.

There are two approaches here:

  1. If we are using percentage error, we need to calculate the percentage error of each value used in our calculations and then add them together.
  2. If we want to know how uncertainties propagate through the calculations, we need to make our calculations using our values with and without the uncertainties.

The difference is the uncertainty propagation in our results.

See the following examples:

Let's say you measure the acceleration due to gravity as 9.91 m/s^2, and you know that your value has an uncertainty of ± 0.1 m/s^2.

You want to calculate the force produced by a falling object. The object has a mass of 2 kg with an uncertainty of 1 gram, i.e. 2 ± 0.001 kg.

To calculate the propagation using percentage error, we need the percentage error of each measurement. For 9.91 m/s^2 with an uncertainty of ± 0.1 m/s^2, the relative error is 0.1 / 9.91 ≈ 0.0101.

Multiplying by 100 and adding the percentage symbol, we get 1%. Since the mass of 2 kg has an uncertainty of 1 gram, we calculate the percentage error for this too, getting a value of 0.05%.

To determine the percentage error propagation, we add together both errors, giving approximately 1.05%.

To calculate the uncertainty propagation, we need to calculate the force as F = m · g. Without the uncertainties, we obtain the expected value:

F = 2 kg · 9.91 m/s^2 = 19.82 Newtons

Now we calculate the value with the uncertainties, taking the most pessimistic case in which both uncertainties push in the same direction:

F = (2 + 0.001) kg · (9.91 + 0.1) m/s^2 = 2.001 kg · 10.01 m/s^2 ≈ 20.03 Newtons

We now subtract both results, which gives an uncertainty of about 0.21 Newtons.

The result is expressed as expected value ± uncertainty value: F = 19.82 ± 0.21 Newtons.
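The most-pessimistic propagation for F = m · g can be sketched in Python (an illustrative snippet; the names m, dm, g, dg are ours, not from the text):

```python
# F = m * g with the uncertainties from the example above
m, dm = 2.0, 0.001  # mass and its uncertainty (kg)
g, dg = 9.91, 0.1   # acceleration and its uncertainty (m/s^2)

expected = m * g                     # 19.82 N
worst_case = (m + dm) * (g + dg)     # most pessimistic combination
uncertainty = worst_case - expected  # ~0.21 N

print(f"F = {expected:.2f} +/- {uncertainty:.2f} N")
```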

If we use values with uncertainties and errors, we need to report this in our results.

Reporting uncertainties

To report a result with uncertainties, we write the calculated value followed by the uncertainty. We can choose to put the quantity inside parentheses. Here is an example of how to report uncertainties.

We measure a force, and according to our results, the force has an uncertainty of 0.21 Newtons.

Our result is 19.62 Newtons, which has a possible variation of plus or minus 0.21 Newtons, reported as 19.62 ± 0.21 N.

Propagation of uncertainties

See the following general rules on how uncertainties propagate and how to calculate uncertainties. For any propagation of uncertainty, values must have the same units.

Addition and subtraction: if values are being added or subtracted, the total uncertainty is the sum of the individual uncertainties. If we have measurements (A ± a) and (B ± b), the result of adding them is A + B with a total uncertainty of ± (a + b).

Let's say we are adding two pieces of metal with lengths of 1.3 m and 1.2 m. The uncertainties are ± 0.05 m and ± 0.01 m. The total value after adding them is 2.5 m with an uncertainty of ± (0.05 m + 0.01 m) = ± 0.06 m.

Multiplication by an exact number: the total uncertainty is calculated by multiplying the uncertainty by the exact number.

Let's say we are calculating the circumference of a circle, which is equal to C = 2 · π · r. We measure the radius as r = 1 ± 0.1 m. The uncertainty is ± 2 · π · 0.1 m, giving us an uncertainty value of approximately 0.63 m.

Division by an exact number: the procedure is the same as in multiplication. In this case, we divide the uncertainty by the exact value to obtain the total uncertainty.

If we have a length of 1.2 m with an uncertainty of ± 0.03 m and divide this by 5, the uncertainty is ± 0.03 m / 5 = ± 0.006 m.
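The three rules above can be collected into one short Python sketch (illustrative names, using the numbers from the examples):

```python
import math

# Addition: uncertainties add (the two pieces of metal above)
a, da = 1.3, 0.05  # m
b, db = 1.2, 0.01  # m
total = a + b                  # 2.5 m
dtotal = da + db               # 0.06 m

# Multiplication by an exact number: C = 2 * pi * r
r, dr = 1.0, 0.1               # m
circumference = 2 * math.pi * r
dcirc = 2 * math.pi * dr       # ~0.63 m

# Division by an exact number
length, dlength = 1.2, 0.03    # m
dlength_divided = dlength / 5  # 0.006 m
```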

Data deviation

We can also calculate the deviation of data produced by the uncertainty after we make calculations using the data. The data deviation changes if we add, subtract, multiply, or divide the values. Data deviation uses the symbol δ.

  • Data deviation after subtraction or addition: to calculate the deviation of the result, we take the square root of the sum of the squared uncertainties: δz = √(a^2 + b^2).

  • Data deviation after multiplication or division: to calculate the data deviation of several measurements, we need the uncertainty-to-real-value ratio of each and then take the square root of the sum of the squared terms. For measurements A ± a and B ± b: δz / z = √((a/A)^2 + (b/B)^2).

If we have more than two values, we need to add more terms.

  • Data deviation if exponents are involved: we multiply each fractional uncertainty by its exponent and then apply the multiplication and division formula. If we have y = (A ± a)^2 · (B ± b)^3, the deviation will be: δy / y = √((2a/A)^2 + (3b/B)^2).

If we have more than two values, we need to add more terms.
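The quadrature formulas above can be written as small helper functions (a sketch; the function names are ours):

```python
import math

def deviation_sum(a, b):
    """Absolute deviation after addition or subtraction: sqrt(a^2 + b^2)."""
    return math.sqrt(a ** 2 + b ** 2)

def fractional_deviation_product(A, a, B, b):
    """Fractional deviation after multiplying or dividing A +/- a and B +/- b."""
    return math.sqrt((a / A) ** 2 + (b / B) ** 2)

def fractional_deviation_powers(A, a, n, B, b, m):
    """Fractional deviation for y = A^n * B^m: each exponent multiplies its ratio."""
    return math.sqrt((n * a / A) ** 2 + (m * b / B) ** 2)
```

For more than two values, the same pattern extends with one squared term per measurement.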

Rounding numbers

When errors and uncertainties are either very small or very large, it is convenient to remove terms if they do not alter our results. When we round numbers, we can round up or down.

Measuring the value of the gravitational acceleration on Earth, our value is 9.81 m/s^2, and we have an uncertainty of ± 0.10003 m/s^2. The value after the decimal point varies our measurement by 0.1 m/s^2; however, the trailing 0.00003 has a magnitude so small that its effect would be barely noticeable. We can, therefore, round by removing everything after 0.1, reporting the uncertainty as ± 0.1 m/s^2.

Rounding integers and decimals

To round numbers, we need to decide what values are important depending on the magnitude of the data.

There are two options when rounding numbers, rounding up or down. The option we choose depends on the number after the digit we think is the lowest value that is important for our measurements.

  • Rounding up: we eliminate the numbers that we think are not necessary. A simple example is rounding up 3.25 to 3.3.
  • Rounding down: again, we eliminate the numbers that we think are not necessary. An example is rounding down 76.24 to 76.2.
  • The rule for rounding up and down: as a general rule, when the digit after the last one you keep is between 0 and 4, you round down. If it is between 5 and 9, you round up; 5 is always rounded up. For instance, 3.16 and 3.15 become 3.2, while 3.14 becomes 3.1.

By looking at the question, you can often deduce how many decimal places (or significant figures) are needed. Let's say you are given a plot with numbers that have only two decimal places. You would then also be expected to include two decimal places in your answers.

Round quantities with uncertainties and errors

When we have measurements with errors and uncertainties, the values with higher errors and uncertainties set the total uncertainty and error values. Another approach is required when the question asks for a certain number of decimals.

Let's say we have two values (9.3 ± 0.4) and (10.2 ± 0.14). If we add both values, we also need to add their uncertainties. Adding both values gives us 19.5 with a total uncertainty of | 0.4 | + | 0.14 | = ± 0.54. Rounding 0.54 to one decimal place gives us 0.5, as 0.54 is closer to 0.5 than to 0.6.

Therefore, the result of adding both numbers and their uncertainties and rounding the result is 19.5 ± 0.5 m.

Let's say you are given two values to multiply, and both have uncertainties. You are asked to calculate the total error propagated. The quantities are A = 3.4 ± 0.01 and B = 5.6 ± 0.1. The question asks you to calculate the error propagated up to one decimal place.

First, you calculate the percentage error of both:

For A: 0.01 / 3.4 · 100 ≈ 0.29%. For B: 0.1 / 5.6 · 100 ≈ 1.79%.

The total error is 0.29% + 1.79% ≈ 2.08%.

You have been asked to approximate to only one decimal place. The result can vary depending on whether you truncate (2.0%) or round (2.1%).
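The percentage-error propagation for this multiplication can be verified with a short Python sketch (illustrative variable names):

```python
# Quantities A and B with their uncertainties, from the example above
A, dA = 3.4, 0.01
B, dB = 5.6, 0.1

pct_A = dA / A * 100       # ~0.29 %
pct_B = dB / B * 100       # ~1.79 %
total_pct = pct_A + pct_B  # ~2.08 %, i.e. 2.1 % to one decimal place

print(f"total propagated error = {total_pct:.1f} %")
```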

Uncertainty and Error in Measurements — Key takeaways

  • Uncertainties and errors introduce variations in measurements and their calculations.
  • Uncertainties are reported so that users can know how much the measured value can vary.
  • There are two types of errors, absolute errors and relative errors. An absolute error is the difference between the expected value and the measured one. A relative error expresses the absolute error as a fraction of the expected value.
  • Errors and uncertainties propagate when we make calculations with data that has errors or uncertainties.
  • When we use data with uncertainties or errors, the data with the largest error or uncertainty dominates the smaller ones. It is useful to calculate how the error propagates, so we know how reliable our results are.
