Error Analysis Notes
From Physics 111-Lab Wiki
In the 111-lab, the experiment does not end when you have finished collecting your data. In many labs, you will be required to perform a detailed analysis of the data you have acquired. The point of any scientific experiment is to make quantitative statements about the properties of the physical world. A common question is: are your measurements consistent with a particular theory or not? This question can only be answered by careful analysis, including both systematic uncertainties and statistical errors.
The goals of this exercise are twofold. One is to familiarize students with the basics of error analysis. Ideally, this will serve as a guide during the acquisition and analysis of data throughout the advanced lab. The second goal is to introduce students to the Matlab numerical computing environment, which you will be using throughout the semester.
Before starting on EAX,
- View the video: Introduction to error analysis
- Matlab is the accepted program for the Physics 111-Lab. Look over the Intro to Matlab section. There are some Matlab scripts for doing curve fitting: 'MatLab Scripts for curve fitting'.
- Read through one of the several good texts on error analysis. There is no substitute for this reading. We recommend L. Lyons, "A Practical Guide to Data Analysis for Physical Science Students", Cambridge University Press (1994). (Note that you can access it only via the campus network. To set up access from outside the campus, see http://www.lib.berkeley.edu/Help/proxy.html.)
- Please see the due date for this exercise at Advanced Lab Experiments and Due Dates
Error of a Single Parameter
As we read about scientific or technical results, we often see the value of a measured parameter written as something like 2.10 ± 0.05. The value 2.10 in this example is usually the average of several measurements, because successive measurements of any particular quantity are never exactly the same. Where does ± 0.05 come from, and what does it mean?
When we make a measurement and get a value for a single quantity or parameter, we want to know how likely it is to be correct. So we measure it again and again, say N times in all, and look at the distribution or spread of the measurements. For example, we might have y = 2.10 as the average of the 5 measurements yi = 2.18, 2.03, 1.95, 2.24, 2.10.
We then calculate the error of the mean with the equation

    σmean = σ/√N,   where   σ = √( Σi (yi − y)2 / (N − 1) )

is the standard deviation of the measurements about the average y. For the five measurements above, σ ≈ 0.12, so σmean ≈ 0.12/√5 ≈ 0.05.
Loosely speaking, this means that there is about a 68% likelihood that the true value of the quantity lies within the range 2.05 to 2.15.
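As a quick numerical check of the example above, here is a minimal sketch in Python/NumPy (used for illustration only; the lab's Matlab commands mean and std are analogous):

```python
import numpy as np

# The five sample measurements from the text
y = np.array([2.18, 2.03, 1.95, 2.24, 2.10])

mean = y.mean()                       # average of the measurements
sigma = y.std(ddof=1)                 # sample standard deviation (N-1 in the denominator)
sigma_mean = sigma / np.sqrt(len(y))  # error of the mean

print(f"{mean:.2f} +/- {sigma_mean:.2f}")  # -> 2.10 +/- 0.05
```

Note the ddof=1 option, which divides by N − 1 rather than N; Matlab's std uses the N − 1 convention by default.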
In experiments like Muon Lifetime, we record the number of events or counts in a given time interval, where the events are random, as in radioactive decay. Suppose we get N counts in a fixed time interval. If we take other samples over the same interval, we get values close to, but different from, N. If these events occur at a known mean rate, independently of the time since the last event, the sampled counts follow the Poisson distribution, whose standard deviation is √N. So, we can make a single record of counts and write N ± √N for the counts in a given interval of time. The larger the N, the larger the standard deviation, but the fractional error grows smaller as the number of counts grows larger, since the fractional error is 1/√N. For example, to get a fractional error of 1%, N must be 10 000!
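These counting-statistics properties can be verified numerically. Below is a short sketch in Python/NumPy (used here for illustration; Matlab's Statistics Toolbox offers an analogous poissrnd command). It simulates many repeated counting experiments with a true mean of 100 counts per interval:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 100 000 counting experiments, each with a true mean of 100 counts
counts = rng.poisson(lam=100, size=100_000)

# The spread of the counts should be close to sqrt(100) = 10
print(counts.std())

# The fractional error should be close to 1/sqrt(100) = 0.10
print(counts.std() / counts.mean())
```

Raising the mean from 100 to 10 000 counts in the simulation shrinks the fractional error from about 10% to about 1%, as stated above.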
Curve Fitting with Two Parameters (Linear Regression)
In many experiments, we have to determine the relationship between parameters, one being the independent parameter. One approach is to fit a smooth analytical curve, a function of the parameters, to the data points and extract the most likely values of the parameters from the fit. In cases where the curve is a straight line, we want to fit a data set to determine the slope and intercept of the line, each of which has some physical significance.
We start with the equation y = mx + b = f(x). We assume that the independent variable is x and we record a data point yi, with an error σi, at each value xi. We then select a straight line and adjust its slope m and intercept b to get a "best fit" to the measured yi. By definition, we get a best fit when the sum of the squares of the deviations of the fitted curve from the data points at each xi is a minimum. In mathematical terms, adjust m and b so as to get a minimum of the sum

    χ2 = Σi [ (yi − (m xi + b)) / σi ]2
We do this by taking the derivatives of the sum, χ2, with respect to m and b and setting them to zero. By solving the simultaneous equations, we obtain the "best" values of m and b. This is called a "least-squares fit". These values will have errors that can be calculated with techniques described in the references. Lyons gives a worked-out example.
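For the equal-weight case (all σi the same), the two simultaneous equations have a well-known closed-form solution. The following is a minimal sketch in Python/NumPy, not the course's Matlab fitting scripts; the function name least_squares_line is our own choice for illustration:

```python
import numpy as np

def least_squares_line(x, y):
    """Equal-weight least-squares fit of y = m*x + b.

    Solves the two normal equations obtained by setting
    d(chi^2)/dm = 0 and d(chi^2)/db = 0.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    N = len(x)
    delta = N * np.sum(x**2) - np.sum(x)**2
    m = (N * np.sum(x*y) - np.sum(x) * np.sum(y)) / delta
    b = (np.sum(x**2) * np.sum(y) - np.sum(x) * np.sum(x*y)) / delta
    return m, b

# Quick check on points that lie exactly on y = 2x + 1
m, b = least_squares_line([0, 1, 2, 3], [1, 3, 5, 7])
print(m, b)  # -> 2.0 1.0
```

With equal weights the σi cancel out of the normal equations, which is why they do not appear in the code; the weighted case is treated in the references.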
Note that you can access the first three references only via the campus network. To set up access from outside the campus, see http://www.lib.berkeley.edu/Help/proxy.html.
- L. Lyons, "A Practical Guide to Data Analysis for Physical Science Students", Cambridge University Press (1994).
- Yardley Beers, "Introduction to the Theory of Error", Addison-Wesley (1957). A short book which is very good.
- P. Bevington, "Data Reduction and Error Analysis for the Physical Sciences", McGraw-Hill. [An old standard that is pretty dry but straightforward. Chapter 5 is particularly important.] Computer programs are available from McGraw-Hill.
- A. C. Melissinos and J. Napolitano, "Experiments in Modern Physics, 2nd Edition", Academic Press (2003).
- W. H. Press, et al., "Numerical Recipes in C: The Art of Scientific Computing, 2nd Edition", Cambridge University Press (1992); refer to Ch. 14—"Modeling of Data". [The Numerical Recipes in Pascal or FORTRAN books contain identical information. This book is the standard reference for doing scientific work on computers. Chapter 14 has a good introduction to the method of maximum likelihood, chi–square fitting, modeling data in general, error estimates of fit parameters, and, important for later experiments, the Monte Carlo method of simulation.]
We also have copies of Books 1, 2, 3, and 4 available in the Physics 111-Lab.
We want to measure the specific activity (number of decays per second) of a radioactive source so that we can use it to calibrate the equipment of the gamma-ray experiment. We use an electronic counter and a timer to measure the number of decays in a given time interval. In round numbers we measure 2000 decays in 10 minutes of observation. For how long should we measure in order to obtain a specific activity with a precision of 2%? Explain.
You are given two measurements of distance and their associated uncertainties: A ± σA and B ± σB. Assuming A and B are not correlated, calculate the propagated uncertainty in (a) the total distance A + B, (b) the difference A − B, (c) the perimeter 2A + 2B, and (d) the area A × B. You must show your calculations, not just quote the formulae from a textbook or reference.
In this problem we will be generating and analyzing lists of normally distributed random numbers. The distribution we are sampling has true mean 0 and standard deviation 1.
- If we sample this distribution N=5 times, what do we expect the mean to be? How about the standard deviation? What's the error on the mean, i.e. the uncertainty of your estimated mean?
- Using Matlab, generate a list of N=5 normally distributed random numbers (the command randn(N,M) will generate M lists of length N). Calculate the mean, standard deviation and the error on the mean. Is this what you expected?
- Now find the mean and standard deviation for the means of M=1000 experiments of N=5 measurements each (that is the mean of the means and the standard deviation of the means). Plot a histogram of the distribution of the means. About how many experiments are within 2 sigma? Is this what you expected?
- Now repeat questions 1-3 for N=10,50,100,1000.
- How does the error on the mean of a single list correlate with the standard deviation of the means of many lists?
You are given a dataset (Media:Peak.zip) from a gamma-ray experiment consisting of ~1000 hits. For each hit, the energy of the gamma-ray is recorded. We will assume that the energies are randomly distributed about a common mean, and that each hit is uncorrelated to others. Read the dataset from the enclosed file and:
- Produce a histogram of the distribution of energies. Choose the number of bins wisely, i.e. so that the width of each bin is smaller than the width of the peak, and at the same time so that the number of entries in the most populated bin is relatively large. Since this plot represents randomly-collected data, plotting error bars would be appropriate (hint: use errorbar function in Matlab).
- Compute the mean and standard deviation of the distribution of energies and their statistical uncertainties.
- Fit the distribution to a Gaussian function, and compare the parameters of the fitted Gaussian with the mean and standard deviation computed above.
- How consistent is the distribution with a Gaussian? In other words, compare the histogram from (1) to the fitted curve, and compute a goodness-of-fit value, such as χ2 / df.
In the optical pumping experiment we measure the resonant frequency as a function of the applied current (local magnetic field). Consider a mock data set:
    Current I (A):     0.0   0.2   0.4   0.6   0.8   1.0   1.2   1.4   1.6   1.8   2.0   2.2
    Frequency f (MHz): 0.14  0.60  1.21  1.94  2.47  3.07  3.83  4.16  4.68  5.60  6.31  6.78
- Plot a graph of the pairs of values. Assuming a linear relationship between I and f, determine the slope and the intercept of the best-fit line using the least-squares method with equal weights, and draw the best-fit line through the data points in the graph.
- From what they know about the equipment used to measure the resonant frequency, your lab partner hastily estimates the uncertainty in the measurement of f to be σf = 0.01 MHz. Estimate the probability that the straight line you found is an adequate description of the observed data if the data are distributed with the uncertainty guessed by your lab partner. (Use Matlab to calculate it, or look it up in a table; for example, see table C-4 in Bevington.) What can you conclude from these results? Repeat the analysis assuming your partner estimated the uncertainty to be σf = 1 MHz. What can you conclude from these results?
- Assume that the best-fit line found in the previous exercise is a good fit to the data. Estimate the uncertainty in the measurement of f from the scatter of the observed data about this line. Again, assume that all the data points have equal weight. Use this to estimate the uncertainty in both the slope and the intercept of the best-fit line. This is the technique you will use in the Optical Pumping lab to determine the uncertainties in the fit parameters.
- Now assume that the uncertainty in each value of f grows with f: σf = 0.03 + 0.03 * f (MHz). Determine the slope and the intercept of the best-fit line using the least-squares method with unequal weights (weighted least-squares fit).
This problem deals with an exponential distribution P(t) = (1/τ) exp(−t/τ).
- What do you expect to be the mean of the distribution? What do you expect to be the standard deviation? (Note: The standard deviation is defined exactly as it is for a normal distribution, but the 1 sigma = 68% rule no longer applies to an exponential distribution.)
- In the muon lifetime experiment we obtain a histogram for the decay rate as a function of the time after the muon enters the detector and announces its presence. We expect the distribution (the histogram) to be described by an exponential function. Rather than fitting with an exponential function, it is more convenient to plot the logarithm of the decay rate as a function of time and then fit a straight line to it. Since each data point (xi, yi) has a statistical error, σi, associated with it, qualitatively, what happens to these errors when the semi-log histogram (xi, log yi) is plotted? Explain and illustrate. What happens quantitatively? Assume yi is reasonably large.
- In a separate experiment, you calculate that ln E0 = 2.1 ± 0.5. What is E0 and its experimental bounds? (Note that 0.5 is not small compared to 2.1.)