With the publication of the previous installment (American Laboratory, July 2003), this series of articles has arrived at a logical spot for reviewing what has been presented to date. The past material can be considered introductory for exploring the main task in calibration studies (i.e., diagnosing or analyzing the calibration data to determine the appropriate model and its inherent uncertainties). Before proceeding to this next topic (which will be discussed in considerable detail), it seems prudent to summarize the background material as well as clear up some confusion reported by various readers regarding part 3 (American Laboratory, January 2003). This article will try to achieve both goals.
Perhaps the overall theme thus far has been “Risk and Uncertainty.” Both are an inescapable fact of life, and the two go hand in hand. People must determine how much risk is acceptable, where risk is a combination of likelihood and severity of an outcome. In statistical terms, the “likelihood” part translates to the confidence level that is needed; typical confidence levels chosen are 95% or 99%. Once the confidence level has been selected, statistics (and this column) can guide the calculation of the uncertainty in a given set of measurements.
After the confidence level has been determined, the user can decide if the results are acceptable for the application at hand. The real-number line has been divided (somewhat artificially) into segments that take into account the amount of uncertainty present. The main dividing lines are 1) the critical level (LC), below which a measurement cannot be distinguished (statistically) from zero; 2) the detection limit (DL), below which a measurement cannot be detected reliably; and 3) the quantitation limit (QL), below which a measurement value cannot be reported reliably. Such low-level measurements are not reliable because there is too much uncertainty associated with them. Clearly, analysts would prefer to make measurements in regions in which the relative uncertainty is minimal. As Dr. William Horwitz once stated, “In almost all cases when dealing with a limit of detection or limit of determination, the primary purpose of determining that limit is to stay away from it.”1
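These limits can be made concrete with a small numerical sketch. The numbers below are hypothetical and assume the simplest Currie-style setting: a known, constant blank standard deviation, a one-sided 95% confidence level (alpha = beta = 0.05), and the common 10-sigma convention for the quantitation limit. None of these particular choices come from the article itself.

```python
from statistics import NormalDist

sigma0 = 0.8                      # hypothetical blank standard deviation (signal units)
z = NormalDist().inv_cdf(0.95)    # one-sided 95% normal quantile, ~1.645

L_C = z * sigma0      # critical level: decision threshold versus zero
DL = 2 * z * sigma0   # detection limit in the alpha = beta = 0.05 case
QL = 10 * sigma0      # quantitation limit, 10-sigma convention

print(L_C, DL, QL)
```

With these assumptions, L_C is about 1.32, DL about 2.63, and QL exactly 8.0, all in the same raw-signal units; measurements comfortably above QL are the ones Horwitz's advice says to aim for.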
As readers of this journal are well aware, measurements are the bread and butter of analytical chemistry. Thus, knowledge of associated uncertainties is critical. With most analytical instruments, a vital (and ever-present) source of variation is the calibration procedure. (In this discussion, it is assumed that the dominant source of uncertainty is calibration itself; such a situation is not always the case.) Since the computer typically generates data in customer-unfriendly units such as peak area or absorbance, the analyst must find a way to transform the raw numbers into useful readings (typically, concentration). Calibration is the name given to the procedure that involves 1) preparing and analyzing a set of standards, 2) collecting the raw-response data, and 3) finding a statistical model to fit these data and predict (the reverse of calibrate) unknown samples.
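A minimal sketch of steps 2 and 3, with invented standards and responses (not data from this series): fit a straight line to response versus concentration, then invert the fitted line to predict an unknown sample's concentration from its raw response.

```python
import numpy as np

conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])    # standard concentrations
resp = np.array([0.1, 2.1, 3.9, 10.2, 19.8])   # raw responses (e.g., peak area)

# Calibrate: ordinary least squares fit of response = b0 + b1 * conc
b1, b0 = np.polyfit(conc, resp, 1)

# Predict (the reverse of calibrate): invert the line for an unknown sample
unknown_response = 8.0
predicted_conc = (unknown_response - b0) / b1   # about 3.99 for these data
```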
Before step 1 can be conducted, the calibration study must be designed. This process involves selecting 1) the actual concentrations or levels that will be included, and 2) the number of replicates that will be analyzed for each standard. The levels chosen should bracket the expected working range for the procedure (no extrapolation, please). In addition, there should be sufficient concentrations to determine the behavior of the data (i.e., deciding if a straight line is adequate or if a higher-order model is needed). Closely spaced levels should be included in areas in which 1) detection is an issue (and a low detection limit is desired), 2) curvature is expected, and/or 3) critical measurements must be made.
Generally, there should be enough replicates of each standard to allow for modeling of the standard deviation (i.e., to determine if the variability is changing with concentration). Also, if high precision is desired in any or all of the concentration range, then extra replicates should be added.
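As an illustration of that point, the hypothetical replicate table below (five replicates at each of five levels) shows how per-level standard deviations can reveal variability that grows with concentration:

```python
import numpy as np

# Rows: concentrations 1, 2, 5, 10, 20 (hypothetical); columns: replicates
data = np.array([
    [0.98, 1.03, 1.01, 0.97, 1.02],
    [2.10, 1.90, 2.00, 2.20, 1.80],
    [5.30, 4.80, 5.10, 4.70, 5.20],
    [10.5, 9.40, 10.2, 9.60, 10.4],
    [21.0, 19.2, 20.5, 18.8, 20.7],
])

# Sample standard deviation of the replicates at each level
sds = data.std(axis=1, ddof=1)
# A clear upward trend in sds argues against assuming constant variance
```

Here the standard deviation climbs from roughly 0.03 at the lowest level to nearly 1.0 at the highest, the kind of pattern that calls for weighted least squares.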
A good starting place (not an absolute truth, just a guide that is adjusted as circumstances warrant) for calibration design is a 5 × 5 study (i.e., five replicates of each of five concentrations). However, careful determination of the objectives of the study (and how to accomplish the goals) is crucial to the success of any calibration work.
In step 3, the choice of a model will depend mainly on whether there is curvature in one or more regions of the range. The choice of a fitting technique depends on whether or not the standard deviation changes with concentration. If the variability is not constant over the entire range, then ordinary least squares should not be used as the fitting technique; weighted least squares (WLS) should be used instead. WLS is especially appropriate when there is a systematic change in variability with respect to concentration, such as variability that increases with increasing concentration. With WLS, weights are calculated and applied to the data. The result is that the noisy data are not allowed to influence the fit as much as are the better-behaved measurements.
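One way to carry this out (a sketch under stated assumptions, not the only implementation): if the standard deviation is taken to be proportional to concentration, each point can be weighted by the reciprocal of its assumed standard deviation. NumPy's polyfit accepts such weights directly; the data below are hypothetical.

```python
import numpy as np

conc = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
resp = np.array([2.0, 4.1, 9.7, 20.6, 38.9])
sd = 0.05 * conc   # assumed model: noise proportional to concentration

# polyfit multiplies residuals by w, so w = 1/sd yields weighted least
# squares; the noisy high-concentration points influence the fit less
b1, b0 = np.polyfit(conc, resp, 1, w=1.0 / sd)
```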
Associated with any chosen combination of model and fitting technique is a prediction interval, the width of which depends on the variability in the data and the confidence level desired. This interval provides a statistically sound estimate of the uncertainty in the next measurement (typically of a sample) predicted from the calibration curve. Both the predicted value and its related uncertainty should be reported for any analyzed sample. Only with both pieces of information can the user decide if the value is “tight” enough for his or her purposes.
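For a straight-line calibration fit by ordinary least squares, the half-width of the prediction interval for the next response at concentration x0 is t * s * sqrt(1 + 1/n + (x0 - xbar)^2 / Sxx). A sketch with invented data follows; the t critical value for 3 degrees of freedom at 95% two-sided confidence (3.182) is taken from standard tables.

```python
import numpy as np

conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])
resp = np.array([0.1, 2.1, 3.9, 10.2, 19.8])
n = len(conc)

b1, b0 = np.polyfit(conc, resp, 1)
resid = resp - (b0 + b1 * conc)
s = np.sqrt(np.sum(resid**2) / (n - 2))      # residual standard deviation
Sxx = np.sum((conc - conc.mean())**2)

t_crit = 3.182   # t(0.975, n - 2 = 3 df)
x0 = 4.0         # concentration at which the next sample will be measured
half_width = t_crit * s * np.sqrt(1 + 1/n + (x0 - conc.mean())**2 / Sxx)
```

The interval is narrowest near the mean of the standards and widens toward the ends of the range, one more reason not to extrapolate beyond the calibrated levels.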
Figure 1 - Plot of reported concentration (obtained from the appropriate calibration curve) versus true concentration. The reported concentrations are for samples that have been spiked with known concentrations and are then analyzed. Legend: 1) solid line—45° line, representing the situation where reported concentration always equals true concentration; 2) dashed line—reported-versus-true line; 3) dotted line—uncertainty interval around the reported-versus-true line; 4) dot—individual reported-versus-true measurement for a given spiked sample.
In any discipline, terminology is developed to define various concepts that are specific to that area. Statistics is no exception. Six terms were discussed in the third article of the series (American Laboratory, January 2003). Those concepts will be presented again, this time in conjunction with a concentration-related figure (see Figure 1).
It should be noted that the numerical value of any of the terms may vary as the true value being measured changes; it is generally not the case that bias, precision, etc., are the same throughout the entire working range of a measurement system.
It also should be noted that “error” refers to a measurement itself, while the rest of the terms pertain to a measurement system.
- Error (of a measurement)—the (usually) unknown difference between a single reported measurement and the true value; it can include bias and noise
- Bias (of a measurement system)—the systematic (or average) difference between reported values and the true value
- Noise (of a measurement system)—measurement variation over time or space, even with no change in the true value; sometimes called random error or stochastic error
- Precision (of a measurement system)—the consistency of measurements over time or space; precision is inversely related to noise; high precision is equivalent to low noise, and low precision is equivalent to high noise
- Uncertainty (of a measurement system)—a statistical interval within which measurement errors are believed to occur, at some level of confidence; sometimes reported as plus or minus the half-width of the interval; accounts for noise and (ideally) bias
- Resolution—the smallest discernible difference between any two measurements that are reported within the working range of the method
1. Currie LA, ed. Detection in analytical chemistry: importance, theory, and practice. American Chemical Society Symposium Series 361; 1988:310.
Mr. Coleman is an Applied Statistician, Alcoa Technical Center, MST-C, 100 Technical Dr., Alcoa Center, PA 15069, U.S.A.; e-mail: [email protected]. Ms. Vanatta is an Analytical Chemist, Air Liquide-Balazs™ Analytical Services, Box 650311, MS 301, Dallas, TX 75265, U.S.A.; tel: 972-995-7541; fax: 972-995-3204; e-mail: [email protected].