
1010 INTRODUCTION

1010 A. Scope and Application of Methods

The procedures described in these standards are intended for the examination of waters of a wide range of quality, including water suitable for domestic or industrial supplies, surface water, ground water, cooling or circulating water, boiler water, boiler feed water, treated and untreated municipal or industrial wastewater, and saline water. The unity of the fields of water supply, receiving water quality, and wastewater treatment and disposal is recognized by presenting methods of analysis for each constituent in a single section for all types of waters.

An effort has been made to present methods that apply generally. Where alternative methods are necessary for samples of different composition, the basis for selecting the most appropriate method is presented as clearly as possible. However, samples with extreme concentrations or otherwise unusual compositions or characteristics may present difficulties that preclude the direct use of these methods. Hence, some modification of a procedure may be necessary in specific instances. Whenever a procedure is modified, the analyst should state plainly the nature of the modification in the report of results.

Certain procedures are intended for use with sludges and sediments. Here again, the effort has been to present methods of the widest possible application, but when chemical sludges or slurries or other samples of highly unusual composition are encountered, the methods of this manual may require modification or may be inappropriate.

Most of the methods included here have been endorsed by regulatory agencies. Procedural modification without formal approval may be unacceptable to a regulatory body.

The analysis of bulk chemicals received for water treatment is not included herein. A committee of the American Water Works Association prepares and issues standards for water treatment chemicals.

Part 1000 contains information that is common to, or useful in, laboratories desiring to produce analytical results of known quality, that is, of known accuracy and with known uncertainty in that accuracy. To accomplish this, apply the quality assurance methods described herein to the standard methods described elsewhere in this publication. Other sections of Part 1000 address laboratory equipment, laboratory safety, sampling procedures, and method development and validation, all of which provide necessary information.

1010 B. Statistics
1. Normal Distribution

If a measurement is repeated many times under essentially identical conditions, the results of each measurement, x, will be distributed randomly about a mean value (arithmetic average) because of uncontrollable or experimental error. If an infinite number of such measurements were to be accumulated, the individual values would be distributed in a curve similar to those shown in Figure 1010:1. The left curve illustrates the Gaussian or normal distribution, which is described precisely by the mean, μ, and the standard deviation, σ. The mean, or average, of the distribution is simply the sum of all values divided by the number of values so summed, i.e., μ = (Σxᵢ)/n. Because no measurements are repeated an infinite number of times, an estimate of the mean is made, using the same summation procedure but with n equal to a finite number of repeated measurements (10, or 20, or . . .). This estimate of μ is denoted by x̄. The standard deviation of the normal distribution is defined as σ = [Σ(x − μ)²/n]^(1/2). Again, the analyst can only estimate the standard deviation because the number of observations made is finite; the estimate of σ is denoted by s and is calculated as follows:

s = [Σ(x − x̄)²/(n − 1)]^(1/2)

The standard deviation fixes the width, or spread, of the normal distribution and also includes a fixed fraction of the values making up the curve. For example, 68.27% of the measurements lie between μ ± 1σ, 95.45% between μ ± 2σ, and 99.73% between μ ± 3σ. It is sufficiently accurate to state that 95% of the values are within ±2σ and 99% within ±3σ. When values are assigned to the ±σ multiples, they are confidence limits. For example, 10 ± 4 indicates that the confidence limits are 6 and 14, while values from 6 to 14 represent the confidence interval.

Another useful statistic is the standard error of the mean, σμ, which is the standard deviation divided by the square root of the number of values, or σ/√n. This is an estimate of the accuracy of

Figure 1010:1. Normal (left) and skewed (right) distributions.

the mean and implies that another sample from the same population would have a mean within some multiple of this. Multiples of this statistic include the same fraction of the values as stated above for σ. In practice, a relatively small number of average values is available, so the confidence intervals of the mean are expressed as x̄ ± ts/√n, where t has the following values for 95% confidence intervals:

n        t
2        12.71
3        4.30
4        3.18
5        2.78
10       2.26
∞        1.96

The use of t compensates for the tendency of a small number of values to underestimate uncertainty. For n > 15, it is common to use t = 2 to estimate the 95% confidence interval.

Still another statistic is the relative standard deviation, σ/μ, with its estimate s/x̄, also known as the coefficient of variation (CV), which commonly is expressed as a percentage. This statistic normalizes the standard deviation and sometimes facilitates making direct comparisons among analyses that include a wide range of concentrations. For example, if analyses at low concentrations yield a result of 10 ± 1.5 mg/L and at high concentrations 100 ± 8 mg/L, the standard deviations do not appear comparable. However, the percent relative standard deviations are 100(1.5/10) = 15% and 100(8/100) = 8%, which indicate the smaller variability obtained by using this parameter.

2. Log-Normal Distribution

In many cases the results obtained from analysis of environmental samples will not be normally distributed, i.e., a graph of the data will be obviously skewed, as shown at right in Figure 1010:1, with the mode, median, and mean being distinctly different. To obtain a nearly normal distribution, convert the results to logarithms and then calculate x̄ and s. The antilogarithms of these two values are estimates of the geometric mean and the geometric standard deviation, xg and sg.

3. Rejection of Data

Quite often in a series of measurements, one or more of the results will differ greatly from the other values. Theoretically, no result should be rejected, because it may indicate either a faulty technique that casts doubt on all results or the presence of a true variant in the distribution. In practice, reject the result of any analysis in which a known error has occurred. In environmental studies, extremely high and low concentrations of contaminants may indicate the existence of areas with problems or areas with no contamination, so they should not be rejected arbitrarily.

TABLE 1010:1. CRITICAL VALUES FOR 5% AND 1% TESTS OF DISCORDANCY FOR A SINGLE OUTLIER IN A NORMAL SAMPLE

Number of               Critical Value
Measurements
n                       5%        1%
3                       1.15      1.15
4                       1.46      1.49
5                       1.67      1.75
6                       1.82      1.94
7                       1.94      2.10
8                       2.03      2.22
9                       2.11      2.32
10                      2.18      2.41
14                      2.37      2.66
15                      2.41      2.71
16                      2.44      2.75
18                      2.50      2.82
20                      2.56      2.88
30                      2.74      3.10
40                      2.87      3.24
50                      2.96      3.34
60                      3.03      3.41
100                     3.21      3.60
120                     3.27      3.66

Source: BARNETT, V. & T. LEWIS. 1984. Outliers in Statistical Data. John Wiley & Sons, New York, N.Y.
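For readers who want to reproduce the calculations in 1010 B.1 and 1010 B.2, the following short Python sketch (not part of the standard) computes the mean, standard deviation, 95% confidence interval, coefficient of variation, and geometric statistics for a small set of hypothetical replicate results; the t value of 2.45 assumed here corresponds to n = 7 (6 degrees of freedom).

# Illustrative statistics for a set of hypothetical replicate analyses (mg/L).
import math
import statistics

results = [9.2, 10.1, 9.8, 10.4, 9.6, 10.3, 9.9]

n = len(results)
mean = statistics.mean(results)              # estimate of the mean, x-bar
s = statistics.stdev(results)                # estimate of sigma, with n - 1 in the denominator
cv_percent = 100 * s / mean                  # relative standard deviation (coefficient of variation)

# 95% confidence interval of the mean, x-bar +/- t*s/sqrt(n); t = 2.45 for n = 7
t_95 = 2.45
half_width = t_95 * s / math.sqrt(n)

# Geometric mean and geometric standard deviation for log-normally distributed data
logs = [math.log10(x) for x in results]
x_g = 10 ** statistics.mean(logs)
s_g = 10 ** statistics.stdev(logs)

print(f"mean = {mean:.2f}, s = {s:.2f}, CV = {cv_percent:.1f}%")
print(f"95% confidence interval: {mean:.2f} +/- {half_width:.2f}")
print(f"geometric mean = {x_g:.2f}, geometric standard deviation = {s_g:.2f}")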

An objective test for outliers has been described.¹ If a set of data is ordered from low to high: xL, x2, . . ., xH, and the average and standard deviation are calculated, then suspected high or low outliers can be tested by the following procedure. First, calculate the statistic T:

T = (xH − x̄)/s for a high value, or
T = (x̄ − xL)/s for a low value.

Second, compare the value of T with the value from Table 1010:1 for either a 5% or 1% level of significance. If the calculated T is larger than the table value for the number of measurements, n, then the xH or xL is an outlier at that level of significance.

Further information on statistical techniques is available elsewhere.²,³

4. References

1. BARNETT, V. & T. LEWIS. 1984. Outliers in Statistical Data. John Wiley & Sons, New York, N.Y.
2. NATRELLA, M.G. 1963. Experimental Statistics. National Bur. Standards Handbook 91, Washington, D.C.
3. SNEDECOR, G.W. & W.G. COCHRAN. 1980. Statistical Methods. Iowa State University Press, Ames.
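A minimal sketch of the discordancy test above, using hypothetical data and the Table 1010:1 critical value for n = 6; it is an illustration only, not a prescribed procedure.

# Single-outlier discordancy test on hypothetical results; 12.9 is the suspect value.
import statistics

data = [10.1, 10.3, 9.8, 10.0, 10.2, 12.9]

mean = statistics.mean(data)
s = statistics.stdev(data)

t_high = (max(data) - mean) / s      # T for a suspected high outlier
t_low = (mean - min(data)) / s       # T for a suspected low outlier

critical_5pct = 1.82                 # Table 1010:1 critical value for n = 6 at 5% significance
print(f"T(high) = {t_high:.2f}, T(low) = {t_low:.2f}")
print("high value is an outlier at the 5% level:", t_high > critical_5pct)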

1010 C. Glossary
1. Definition of Terms

The purpose of this glossary is to define concepts, not regulatory terms; it is not intended to be all-inclusive.

Accuracy--combination of bias and precision of an analytical procedure, which reflects the closeness of a measured value to a true value.
Bias--consistent deviation of measured values from the true value, caused by systematic errors in a procedure.
Calibration check standard--standard used to determine the state of calibration of an instrument between periodic recalibrations.
Confidence coefficient--the probability, %, that a measurement result will lie within the confidence interval or between the confidence limits.
Confidence interval--set of possible values within which the true value will lie with a specified level of probability.
Confidence limit--one of the boundary values defining the confidence interval.
Detection levels--various levels, in increasing order, are:
  Instrumental detection level (IDL)--the constituent concentration that produces a signal greater than five times the signal/noise ratio of the instrument. This is similar, in many respects, to "critical level" and "criterion of detection." The latter level is stated as 1.645 times the s of blank analyses.
  Lower level of detection (LLD)--the constituent concentration in reagent water that produces a signal 2(1.645)s above the mean of blank analyses. This sets both Type I and Type II errors at 5%. Other names for this level are "detection level" and "level of detection" (LOD).
  Method detection level (MDL)--the constituent concentration that, when processed through the complete method, produces a signal with a 99% probability that it is different from the blank. For seven replicates of the sample, the mean must be 3.14s above the blank, where s is the standard deviation of the seven replicates. Compute MDL from replicate measurements one to five times the actual MDL. The MDL will be larger than the LLD because of the few replications and the sample processing steps, and may vary with constituent and matrix.
  Level of quantitation (LOQ)/minimum quantitation level (MQL)--the constituent concentration that produces a signal sufficiently greater than the blank that it can be detected within specified levels by good laboratories during routine operating conditions. Typically it is the concentration that produces a signal 10s above the reagent water blank signal.
Duplicate--usually the smallest number of replicates (two), but specifically herein refers to duplicate samples, i.e., two samples taken at the same time from one location.
Internal standard--a pure compound added to a sample extract just before instrumental analysis to permit correction for inefficiencies.
Laboratory control standard--a standard, usually certified by an outside agency, used to measure the bias in a procedure. For certain constituents and matrices, use National Institute of Standards and Technology (NIST) Standard Reference Materials when they are available.
Precision--measure of the degree of agreement among replicate analyses of a sample, usually expressed as the standard deviation.
Quality assessment--procedure for determining the quality of laboratory measurements by use of data from internal and external quality control measures.
Quality assurance--a definitive plan for laboratory operation that specifies the measures used to produce data of known precision and bias.
Quality control--set of measures within a sample analysis methodology to assure that the process is in control.
Random error--the deviation in any step in an analytical procedure that can be treated by standard statistical techniques.
Replicate--repeated operation occurring within an analytical procedure. Two or more analyses for the same constituent in an extract of a single sample constitute replicate extract analyses.
Surrogate standard--a pure compound added to a sample in the laboratory just before processing so that the overall efficiency of a method can be determined.
Type I error--also called alpha error, is the probability of deciding a constituent is present when it actually is absent.
Type II error--also called beta error, is the probability of not detecting a constituent when it actually is present.
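The multipliers quoted in the detection-level definitions can be illustrated numerically. The sketch below assumes hypothetical blank and low-level replicate results and simply applies the stated factors; it is not a substitute for the MDL procedure referenced in Section 1030.

# Applying the stated detection-level multipliers to hypothetical data (mg/L).
import statistics

blank_results = [0.02, -0.01, 0.03, 0.00, 0.01, 0.02, -0.02]
s_blank = statistics.stdev(blank_results)

critical_level = 1.645 * s_blank        # "criterion of detection" cited for the IDL
lld = 2 * 1.645 * s_blank               # lower level of detection: 2(1.645)s above the blank mean

low_level_replicates = [0.11, 0.14, 0.09, 0.12, 0.10, 0.13, 0.12]   # near the expected MDL
mdl = 3.14 * statistics.stdev(low_level_replicates)   # 3.14 = one-sided 99% t for 7 replicates

loq = 10 * s_blank                      # level of quantitation / minimum quantitation level

print(f"critical level ~ {critical_level:.3f}, LLD ~ {lld:.3f}, "
      f"MDL ~ {mdl:.3f}, LOQ ~ {loq:.3f} mg/L")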



1020 QUALITY ASSURANCE

1020 A. Introduction

This section applies primarily to chemical analyses. See Section 9020 for quality assurance and control for microbiological analyses.

Quality assurance (QA) is the definitive program for laboratory operation that specifies the measures required to produce defensible data of known precision and accuracy. This program will be defined in a documented laboratory quality system.

The laboratory quality system will consist of a QA manual, written procedures, work instructions, and records. The manual should include a quality policy that defines the statistical level of confidence used to express the precision and bias of data, as well as the method detection limits. Quality systems, which include QA policies and all quality control (QC) processes, must be in place to document and ensure the quality of analytical data produced by the laboratory and to demonstrate the competence of the laboratory. Quality systems are essential for any laboratory seeking accreditation under state or federal laboratory certification programs. Included in quality assurance are quality control (Section 1020B) and quality assessment (Section 1020C). See Section 1030 for evaluation of data quality.

1. Quality Assurance Planning

Establish a QA program and prepare a QA manual or plan. Include in the QA manual and associated documents the following items¹⁻⁴: cover sheet with approval signatures; quality policy statement; organizational structure; staff responsibilities; analyst training and performance requirements; tests performed by the laboratory; procedures for handling and receiving samples; sample control and documentation procedures; procedures for achieving traceability of measurements; major equipment, instrumentation, and reference measurement standards used; standard operating procedures (SOPs) for each analytical method; procedures for generation, approval, and control of policies and procedures; procedures for procurement of reference materials and supplies; procedures for procurement of subcontractors' services; internal quality control activities; procedures for calibration, verification, and maintenance of instrumentation and equipment; data-verification practices, including interlaboratory comparison and proficiency-testing programs; procedures to be followed for feedback and corrective action whenever testing discrepancies are detected; procedures for exceptions that permit departure from documented policies; procedures for system and performance audits and reviews; procedures for assessing data precision and accuracy and determining method detection limits; procedures for data reduction, validation, and reporting; procedures for records archiving; procedures and systems for control of the testing environment; and procedures for dealing with complaints from users of the data. Also define and include the responsibility for, and frequency of, management review and updates to the QA manual and associated documents.

On the title page, include approval signatures and a statement that the manual has been reviewed and determined to be appropriate for the scope, volume, and range of testing activities at the laboratory,⁴ as well as an indication that management has made a commitment to assure that the quality systems defined in the QA manual are implemented and followed at all times.

In the QA manual, clearly specify and document the managerial responsibility, authority, quality goals, objectives, and commitment to quality. Write the manual so that it is clearly understood and ensures that all laboratory personnel understand their roles and responsibilities.

Implement and follow chain-of-custody procedures to ensure that chain of custody is maintained and documented for each sample. Institute procedures to permit tracing a sample and its derivatives through all steps from collection through analysis to reporting final results to the laboratory's client and disposal of the sample. Routinely practice adequate and complete documentation, which is critical to assure data defensibility and to meet laboratory accreditation/certification requirements, and ensure full traceability for all tests and samples.

Standard operating procedures (SOPs) describe the analytical methods to be used in the laboratory in sufficient detail that a competent analyst unfamiliar with the method can conduct a reliable review and/or obtain acceptable results. Include in SOPs, where applicable, the following items²⁻⁵: title of referenced consensus test method; sample matrix or matrices; method detection level (MDL); scope and application; summary of SOP; definitions; interferences; safety considerations; waste management; apparatus, equipment, and supplies; reagents and standards; sample collection, preservation, shipment, and storage requirements; specific quality control practices, frequency, acceptance criteria, and required corrective action if acceptance criteria are not met; calibration and standardization; details on the actual test procedure, including sample preparation; calculations; qualifications and performance requirements for analysts (including number and type of analyses); data assessment/data management; references; and any tables, flowcharts, and validation or method performance data. At a minimum, validate a new SOP before use by first determining the MDL and performing an initial demonstration of capability using relevant regulatory guidelines.

Use and document preventive maintenance procedures for instrumentation and equipment. An effective preventive maintenance program will reduce instrument malfunctions, maintain more consistent calibration, be cost-effective, and reduce downtime. Include measurement traceability to National Institute of Standards and Technology (NIST) Standard Reference Materials (SRMs), or commercially available reference materials certified traceable to NIST SRMs, in the QA manual or SOP to establish the integrity of the laboratory calibration and measurement program. Formulate document-control procedures, which are essential to data defensibility, to cover the complete process of document generation, approval, distribution, storage, recall, archiving, and disposal. Maintain logbooks for each test or procedure performed, with complete documentation on preparation and analysis of each sample, including sample identification, associated standards and QC samples, method reference, date/time of preparation/analysis,

analyst, weights and volumes used, results obtained, and any problems encountered. Keep logbooks that document maintenance and calibration for each instrument or piece of equipment. Calibration procedures, corrective actions, internal quality control activities, performance audits, and data assessments for precision and accuracy (bias) are discussed in Sections 1020B and C.

Data reduction, validation, and reporting are the final steps in the data-generation process. The data obtained from an analytical instrument must first be subjected to the data reduction processes described in the applicable SOP before the final result can be obtained. Specify calculations and any correction factors, as well as the steps to be followed in generating the sample result, in the QA manual or SOP. Also specify all of the data validation steps to be followed before the final result is made available. Report results in standard units of mass, volume, or concentration as specified in the method or SOP. Report results below the MDL in accordance with the procedure prescribed in the SOP. Ideally, include a statement of uncertainty with each result. See the references and bibliography for other useful information and guidance on establishing a QA program and developing an effective QA manual.

2. References

1. STANLEY, T.W. & S.S. VERNER. 1983. Interim Guidelines and Specifications for Preparing Quality Assurance Project Plans. EPA-600/4-83-004, U.S. Environmental Protection Agency, Washington, D.C.
2. QUALITY SYSTEMS COMMITTEE, NATIONAL ENVIRONMENTAL LABORATORY ACCREDITATION CONFERENCE. 1996. National Environmental Laboratory Accreditation Conference, 2nd Annual Meeting, Washington, D.C. [available online]. U.S. Environmental Protection Agency, Washington, D.C.
3. QUALITY SYSTEMS COMMITTEE, NATIONAL ENVIRONMENTAL LABORATORY ACCREDITATION CONFERENCE. 1997. National Environmental Laboratory Accreditation Conference, 2nd Interim Meeting, Bethesda, Md. [available online]. U.S. Environmental Protection Agency, Washington, D.C.
4. INTERNATIONAL ORGANIZATION FOR STANDARDIZATION. 1996. General Requirements for the Competence of Testing and Calibration Laboratories, ISO/IEC Guide 25--Draft Four. International Org. for Standardization, Geneva, Switzerland.
5. U.S. ENVIRONMENTAL PROTECTION AGENCY. 1995. Guidance for the Preparation of Standard Operating Procedures (SOPs) for Quality-Related Documents. EPA QA/G-6, Washington, D.C.

3. Bibliography

DELFINO, J.J. 1977. Quality assurance in water and wastewater analysis laboratories. Water Sew. Works 124:79.
INHORN, S.L., ed. 1978. Quality Assurance Practices for Health Laboratories. American Public Health Assoc., Washington, D.C.
STANLEY, T.W. & S.S. VERNER. 1983. Interim Guidelines and Specifications for Preparing Quality Assurance Project Plans. EPA-600/4-83-004, U.S. Environmental Protection Agency, Washington, D.C.
U.S. ENVIRONMENTAL PROTECTION AGENCY. 1997. Manual for the Certification of Laboratories Analyzing Drinking Water. EPA-815-B-97-001, U.S. Environmental Protection Agency, Washington, D.C.
INTERNATIONAL ORGANIZATION FOR STANDARDIZATION. 1990. General Requirements for the Competence of Testing and Calibration Laboratories, ISO/IEC Guide 25. International Org. for Standardization, Geneva, Switzerland.
U.S. ENVIRONMENTAL PROTECTION AGENCY. 1994. National Environmental Laboratory Accreditation Conference (NELAC) Notice of Conference and Availability of Standards. Federal Register 59, No. 231.
U.S. ENVIRONMENTAL PROTECTION AGENCY. 1995. Good Automated Laboratory Practices. U.S. Environmental Protection Agency, Research Triangle Park, N.C.
AMERICAN ASSOCIATION FOR LABORATORY ACCREDITATION. 1996. General Requirements for Accreditation. A2LA, American Assoc. Laboratory Accreditation, Gaithersburg, Md.

1020 B. Quality Control


Include in each analytical method or SOP the minimum required QC for each analysis. A good quality control program consists of at least the following elements, as applicable: initial demonstration of capability, ongoing demonstration of capability, method detection limit determination, reagent blank (also referred to as method blank), laboratory-fortified blank (also referred to as blank spike), laboratory-fortified matrix (also referred to as matrix spike), laboratory-fortified matrix duplicate (also referred to as matrix spike duplicate) or duplicate sample, internal standard, surrogate standard (for organic analysis) or tracer (for radiochemistry), calibration, control charts, corrective action, frequency of QC indicators, QC acceptance criteria, and definitions of a batch. Sections 1010 and 1030 describe calculations for evaluating data quality.

1. Initial Demonstration of Capability

The laboratory should conduct an initial demonstration of capability (IDC) at least once, by each analyst, before analysis of any sample, to demonstrate proficiency to perform the method and obtain acceptable results for each analyte. The IDC also is used to demonstrate that modifications to the method by the laboratory will produce results as precise and accurate as results produced by the reference method. As a minimum, include a reagent blank and at least four laboratory-fortified blanks (LFBs) at a concentration between 10 times the method detection level (MDL) and the midpoint of the calibration curve, or other level as specified in the method. Run the IDC after analyzing all required calibration standards. Ensure that the reagent blank does not contain any analyte of interest at a concentration greater than half the MQL or other level as specified in the method. See Section 1010C for the definition of MQL. Ensure that precision and accuracy (percent recovery) calculated for the LFBs are within the acceptance criteria listed in the method of choice. If no acceptance criteria are provided, use 80 to 120% recovery and ≤20% relative standard deviation (RSD) as a starting point. If details of the initial demonstration of capability are not provided in the method of choice, specify and reference the method or procedure used for demonstrating capability.

2. Ongoing Demonstration of Capability

The ongoing demonstration of capability, sometimes referred to as a "laboratory control sample or laboratory control standard," "quality control check sample," or "laboratory-fortified blank," is used to ensure that the laboratory remains in control during the period when samples are analyzed, and separates laboratory performance from method performance on the sample matrix. See ¶ 5 below for further details on the laboratory-fortified blank. Preferably obtain this sample from an external source (not the same stock as the calibration standards). Analyze QC check samples on a quarterly basis, at a minimum.

3. Method Detection Level Determination and Application

Determine the method detection level (MDL) for each analyte of interest and method to be used before data from any samples are reported, using the procedure described in Section 1030C. As a starting point for determining the concentration to use in MDL determination, use an estimate of five times the estimated detection limit. Perform MDL determinations as an iterative process. If the calculated MDL is not within a factor of 10 of the value for the known addition, repeat determinations at a more suitable concentration. Conduct MDL determinations at least annually (or at another specified frequency) for each analyte and method in use at the laboratory. Perform or verify the MDL determination for each instrument. Perform MDL determinations over a period of at least 3 d for each part of the procedure. Calculate recoveries for MDL samples. Recoveries should be between 50 and 150% and %RSD values ≤20%, or repeat the MDL determination. Maintain MDL and IDC data and have them available for inspection.

Apply the MDL to reporting sample results as follows:

• Report results below the MDL as "not detected."
• Report results between the MDL and MQL with qualification for quantitation.
• Report results above the MQL with a value and its associated error.

4. Reagent Blank

A reagent blank, or method blank, consists of reagent water (see Section 1080) and all reagents that normally are in contact with a sample during the entire analytical procedure. The reagent blank is used to determine the contribution of the reagents and the preparative analytical steps to error in the measurement. As a minimum, include one reagent blank with each sample set (batch) or on a 5% basis, whichever is more frequent. Analyze a blank after the daily calibration standard and after highly contaminated samples if carryover is suspected. Evaluate reagent blank results for the presence of contamination. If unacceptable contamination is present in the reagent blank, identify and eliminate the source of contamination. Typically, sample results are suspect if analyte(s) in the reagent blank are greater than the MQL. Samples analyzed with an associated contaminated blank must be re-prepared and re-analyzed. Refer to the method of choice for specific acceptance criteria for the reagent blank. Guidelines for qualifying sample results with consideration to reagent blank results are as follows:

• If the reagent blank is less than the MDL and sample results are greater than the MQL, then no qualification is required.
• If the reagent blank is greater than the MDL but less than the MQL and sample results are greater than the MQL, then qualify the results to indicate that analyte was detected in the reagent blank.
• If the reagent blank is greater than the MQL, further corrective action and qualification is required.

5. Laboratory-Fortified Blank

A laboratory-fortified blank (LFB) is a reagent water sample to which a known concentration of the analytes of interest has been added. An LFB is used to evaluate laboratory performance and analyte recovery in a blank matrix. As a minimum, include one LFB with each sample set (batch) or on a 5% basis, whichever is more frequent. The definition of a batch is typically method-specific. Process the LFB through all of the sample preparation and analysis steps. Use an added concentration of at least 10 times the MDL, the midpoint of the calibration curve, or other level as specified in the method. Prepare the addition solution from a different reference source than that used for calibration. Evaluate the LFB for percent recovery of the added analytes. If LFB results are out of control, take corrective action, including re-preparation and re-analysis of associated samples if required. Use the results obtained for the LFB to evaluate batch performance, calculate recovery limits, and plot control charts (see ¶ 12 below). Refer to the method of choice for specific acceptance criteria for the LFB.

6. Laboratory-Fortified Matrix

A laboratory-fortified matrix (LFM) is an additional portion of a sample to which known amounts of the analytes of interest are added before sample preparation. The LFM is used to evaluate analyte recovery in a sample matrix. As a minimum, include one LFM with each sample set (batch) or on a 5% basis, whichever is more frequent. Add a concentration of at least 10 times the MRL, the midpoint of the calibration curve, or other level as specified in the method to the selected sample(s). Preferably use the same concentration as for the LFB to allow the analyst to separate the effect of the matrix from laboratory performance. Prepare the LFM from a reference source different from that used for calibration. Make the addition such that sample background levels do not adversely affect the recovery (preferably adjust LFM concentrations if the known sample is above five times the background level). For example, if the sample contains the analyte of interest, make the LFM sample at a concentration equivalent to the concentration found in the known sample. Evaluate the results obtained for LFMs for accuracy or percent recovery. If LFM results are out of control, take corrective action to rectify the effect, or use another method or the method of standard addition. Refer to the method of choice for specific acceptance criteria for LFMs until the laboratory develops statistically valid, laboratory-specific performance criteria. Base sample batch acceptance on results of LFB analyses rather than LFMs alone, because the matrix of the LFM sample may interfere with the method performance.
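As an illustration of the recovery and precision checks described in ¶s 1, 5, and 6 above, the following sketch computes percent recovery and %RSD for hypothetical LFB and LFM results; acceptance limits should come from the method of choice, and the 80 to 120% recovery and ≤20% RSD values used here are only the starting-point criteria mentioned earlier.

# Percent recovery and %RSD for hypothetical laboratory-fortified blank and matrix results.
import statistics

def percent_recovery(measured, amount_added):
    # Recovery of a laboratory-fortified blank: 100 x measured / amount added.
    return 100.0 * measured / amount_added

def matrix_spike_recovery(spiked_result, unspiked_result, amount_added):
    # Recovery in a laboratory-fortified matrix: 100 x (spiked - unspiked) / amount added.
    return 100.0 * (spiked_result - unspiked_result) / amount_added

lfb_results = [4.7, 5.2, 4.9, 5.1]          # mg/L measured in four LFBs fortified at 5.0 mg/L
recoveries = [percent_recovery(x, 5.0) for x in lfb_results]
rsd = 100 * statistics.stdev(lfb_results) / statistics.mean(lfb_results)

print("LFB recoveries (%):", [round(r, 1) for r in recoveries])
print(f"%RSD = {rsd:.1f}")
print("within 80-120% recovery and <=20% RSD:",
      all(80 <= r <= 120 for r in recoveries) and rsd <= 20)

print("LFM recovery (%):", round(matrix_spike_recovery(12.4, 7.6, 5.0), 1))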

7. Laboratory-Fortified Matrix Duplicate/Duplicate Sample

A LFM duplicate is a second portion of the sample described in ¶ 6 above to which a known amount of the analyte of interest is added before sample preparation. If sufficient sample volume is collected, this second portion of sample is added and processed in the same way as the LFM. If sufficient sample volume is not collected to analyze a LFM duplicate, use an additional portion of an alternate sample to obtain results for a duplicate sample to gather data on precision. As a minimum, include one LFM duplicate or one duplicate sample with each sample set (batch) or on a 5% basis, whichever is more frequent. Evaluate the results obtained for LFM duplicates for precision and accuracy (precision alone for duplicate samples). If LFM duplicate results are out of control, take corrective action to rectify the effect, or use another method or the method of standard addition. If duplicate results are out of control, reprepare and reanalyze the sample and take additional corrective action as needed (such as reanalysis of the sample batch). Refer to the method of choice for specific acceptance criteria for LFM duplicates or duplicate samples until the laboratory develops statistically valid, laboratory-specific performance criteria. If no limits are included in the method of choice, calculate preliminary limits from the initial demonstration of capability. Base sample batch acceptance on results of LFB analyses rather than LFM duplicates alone, because the matrix of the LFM sample may interfere with the method performance.

8. Internal Standard

Internal standards (IS) are used for organic analyses by GC/MS, some GC analyses, and some metals analyses by ICP/MS. An internal standard is an analyte included in each standard and added to each sample or sample extract/digestate just before sample analysis. Internal standards should mimic the analytes of interest but not interfere with the analysis. Choose an internal standard having a retention time or mass spectrum separate from the analytes of interest and eluting in a representative area of the chromatogram. Internal standards are used to monitor retention time, calculate relative response, and quantify the analytes of interest in each sample or sample extract/digestate. When quantifying by the internal standard method, measure all analyte responses relative to this internal standard, unless interference is suspected. If internal standard results are out of control, take corrective action, including reanalysis if required. Refer to the method of choice for specific internal standards and their acceptance criteria.

9. Surrogates and Tracers

Surrogates are used for organic analyses; tracers are used for radiochemistry analyses. Surrogates and tracers are used to evaluate method performance in each sample. A surrogate standard is a compound of a known amount added to each sample before extraction. Surrogates mimic the analytes of interest and are compound(s) unlikely to be found in environmental samples, such as fluorinated compounds or stable, isotopically labeled analogs of the analytes of interest. Tracers are a different isotope of the analyte or element of interest. Surrogates and tracers are introduced to samples before extraction to monitor extraction efficiency and percent recovery in each sample. If surrogate or tracer results are out of control, take corrective action, including repreparation and reanalysis if required. Refer to the method of choice for specific surrogates or tracers and their acceptance criteria, until the laboratory develops statistically valid, laboratory-specific performance criteria.

10. Calibration

a. Instrument calibration: Perform instrument calibration, as well as maintenance, according to instrument manual instructions. Use the instrument manufacturer's recommendations for calibration. Perform instrument performance checks, such as those for GC/MS analyses, according to method or SOP instructions.

b. Initial calibration: Perform initial calibration with a minimum of three concentrations of standards for linear curves, a minimum of five concentrations of standards for nonlinear curves, or as specified by the method of choice. Choose the lowest concentration at the reporting limit, and the highest concentration at the upper end of the calibration range. Ensure that the calibration range encompasses the analytical concentration values expected in the samples or required dilutions. Choose calibration standard concentrations with no more than one order of magnitude between concentrations.

Use the following calibration functions as appropriate: response factor for internal standard calibration, calibration factor for external standard calibration, or calibration curve. Calibration curves may be linear through the origin, linear not through the origin, or nonlinear through or not through the origin. Some nonlinear functions can be linearized through mathematical transformations, e.g., log. The following acceptance criteria are recommended for the various calibration functions.

If response factors or calibration factors are used, the calculated %RSD for each analyte of interest must be less than the method-specified value. When using response factors (e.g., for GC/MS analysis), evaluate the performance or sensitivity of the instrument for the analyte of interest against minimum acceptance values for the response factors. Refer to the method of choice for the calibration procedure and acceptance criteria on the response factors or calibration factors for each analyte.

If linear regression is used, use the minimum correlation coefficient specified in the method. If the minimum correlation coefficient is not specified, then a minimum value of 0.995 is recommended. Compare each calibration point to the curve and recalculate. If any recalculated values are not within the method acceptance criteria, identify the source of the outlier(s) and correct before sample quantitation. Alternately, a method's calibration can be judged against a reference method by measuring the method's "calibration linearity" or %RSD among the "response factors" at each calibration level or concentration.²

Use the initial calibration, with any of the above functions (response factor, calibration factor, or calibration curve), for quantitation of the analytes of interest in samples. Use calibration verification, described in the next section, only for checks on the initial calibration and not for sample quantitation, unless otherwise specified by the method of choice. Perform initial calibration when the instrument is set up and whenever the calibration verification criteria are not met.

c. Calibration verification: Calibration verification is the periodic confirmation by analysis of a calibration standard that the instrument performance has not changed significantly from the initial calibration. Base this verification on time (e.g., every 12 h) or on the number of samples analyzed (e.g., after every 10 samples).
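Where a method does not spell out the arithmetic, the initial-calibration checks described above can be sketched as follows; the standard concentrations and instrument responses are hypothetical, and the 0.995 correlation coefficient is only the default recommended when the method specifies none.

# Calibration-factor %RSD and least-squares correlation coefficient for hypothetical standards.
import statistics

concentrations = [1.0, 5.0, 10.0, 25.0, 50.0]     # ug/L calibration standards
responses = [980, 5050, 9900, 25200, 50100]       # instrument response (arbitrary units)

# Calibration factors (external-standard calibration): response / concentration
cal_factors = [r / c for r, c in zip(responses, concentrations)]
cf_rsd = 100 * statistics.stdev(cal_factors) / statistics.mean(cal_factors)

# Least-squares line through the points and the correlation coefficient
mean_x = statistics.mean(concentrations)
mean_y = statistics.mean(responses)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(concentrations, responses))
sxx = sum((x - mean_x) ** 2 for x in concentrations)
syy = sum((y - mean_y) ** 2 for y in responses)
slope = sxy / sxx
intercept = mean_y - slope * mean_x
r = sxy / (sxx * syy) ** 0.5

print(f"calibration factor %RSD = {cf_rsd:.1f}")
print(f"slope = {slope:.1f}, intercept = {intercept:.1f}, r = {r:.4f}")
print("meets r >= 0.995:", r >= 0.995)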

Figure 1020:1. Control charts for means.

The accuracy chart includes upper and lower warning levels (WL) and upper and lower control levels (CL). Common practice is to use ±2s and ±3s limits for the WL and CL, respectively, where s represents the standard deviation. These values are derived from stated or measured values for reference materials. The number of measurements, n or n − 1, used to determine the standard deviation, s, is specified relative to statistical confidence limits of 95% for WLs and 99% for CLs. Set up an accuracy chart by using either the calculated values for mean and standard deviation or the percent recovery. Percent recovery is necessary if the concentration varies. Construct a chart for each analytical method. Enter results on the chart each time the QC sample is analyzed. Examples of control charts for accuracy are given in Figure 1020:1.

b. Precision (range) chart: The precision chart also is constructed from the average and standard deviation of a specified number of measurements of the analyte of interest. If the standard deviation of the method is known, use the factors from Table 1020:1 to construct the central line and warning and control limits as in Figure 1020:2. Perfect agreement between replicates or duplicates results in a difference of zero when the values are subtracted, so the baseline on the chart is zero. Therefore, for precision charts, only upper warning limits and upper control limits are meaningful. The standard deviation is converted to the range so that the analyst need only subtract the two results to plot the value on the precision chart. The mean range is computed as:

R̄ = D₂s

the control limit as

CL = R̄ + 3s(R) = D₄R̄

and the warning limit as

WL = R̄ + 2s(R) = R̄ + 2(D₄R̄ − R̄)/3

where:
D₂ = factor to convert s to the range (1.128 for duplicates, as given in Table 1020:1),
s(R) = standard deviation of the range, and
D₄ = factor to convert the mean range to 3s(R) (3.267 for duplicates, as given in Table 1020:1).

TABLE 1020:1. FACTORS FOR COMPUTING LINES ON RANGE CONTROL CHARTS

Number of          Factor for          Factor for
Observations       Central Line        Control Limits
n                  (D₂)                (D₄)
2                  1.128               3.267
3                  1.693               2.575
4                  2.059               2.282
5                  2.326               2.115
6                  2.534               2.004

Source: ROSENSTEIN, M. & A.S. GOLDEN. 1964. Statistical Techniques for Quality Control of Environmental Radioassays. AQCS Rep. Stat-1. Public Health Serv., Winchester, Mass.

Figure 1020:2. Duplicate analyses of a standard.

A precision chart is rather simple when duplicate analyses of a standard are used (Figure 1020:2). For duplicate analyses of samples, the plot will appear different because of the variation in sample concentration. If a constant relative standard deviation in the concentration range of interest is assumed, then R̄, D₄R̄, etc., may be computed as above for several concentrations, a smooth curve drawn through the points obtained, and an acceptable range for duplicates determined. Figure 1020:3 illustrates such a chart. A separate table, as suggested below the figure, will be needed to track precision over time.

More commonly, the range can be expressed as a function of the relative standard deviation (coefficient of variation). The range can be normalized by dividing by the average. Determine the mean range for the pairs analyzed by

R̄ = (ΣRᵢ)/n

and the variance (square of the standard deviation) as

sR² = (ΣRᵢ² − nR̄²)/(n − 1)

Then draw lines on the chart at R̄ + 2sR and R̄ + 3sR and, for each duplicate analysis, calculate the normalized range and enter the result on the chart. Figure 1020:4 is an example of such a chart.

Figure 1020:4. Range chart for variable ranges.

c. Chart analyses: If the warning limits (WL) are at the 95% confidence level, 1 out of 20 points, on the average, would exceed that limit, whereas only 1 out of 100 would exceed the control limits (CL). Use the following guidelines, based on these statistical parameters, which are illustrated in Figure 1020:5:

Control limit--If one measurement exceeds a CL, repeat the analysis immediately. If the repeat measurement is within the CL, continue analyses; if it exceeds the CL, discontinue analyses and correct the problem.

Warning limit--If two out of three successive points exceed a WL, analyze another sample. If the next point is within the WL, continue analyses; if the next point exceeds the WL, evaluate potential bias and correct the problem.

Standard deviation--If four out of five successive points exceed 1s, or are in decreasing or increasing order, analyze another sample. If the next point is less than 1s, or changes the order, continue analyses; otherwise, discontinue analyses and correct the problem.

Trending--If seven successive samples are on the same side of the central line, discontinue analyses and correct the problem.

Figure 1020:3. Range chart for variable concentrations.

Figure 1020:5. Means control chart with out-of-control data (upper half).
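A minimal sketch of the range-chart limits described above, assuming duplicate analyses (n = 2), the Table 1020:1 factors, and hypothetical duplicate results.

# Range (precision) chart limits for hypothetical duplicate analyses of a standard.
import statistics

duplicate_pairs = [(10.1, 10.4), (9.8, 10.1), (10.0, 9.7), (10.2, 10.6), (9.9, 10.0)]

ranges = [abs(a - b) for a, b in duplicate_pairs]
mean_range = statistics.mean(ranges)                 # central line, R-bar

D2, D4 = 1.128, 3.267                                # Table 1020:1 factors for n = 2
control_limit = D4 * mean_range                      # R-bar + 3s(R)
warning_limit = mean_range + 2 * (D4 * mean_range - mean_range) / 3   # R-bar + 2s(R)

s_from_range = mean_range / D2                       # s estimated from the mean range

print(f"R-bar = {mean_range:.3f}, WL = {warning_limit:.3f}, CL = {control_limit:.3f}")
print(f"s estimated from the mean range = {s_from_range:.3f}")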

The above considerations apply when the conditions are either above or below the central line, but not on both sides; e.g., four of five values must exceed either +1s or −1s. After correcting the problem, reanalyze the samples analyzed between the last in-control measurement and the out-of-control one.

Another important function of the control chart is assessment of improvements in method precision. In the accuracy and precision charts, if measurements never or rarely exceed the WL, recalculate the WL and CL using the 10 to 20 most recent data points. Trends in precision can be detected sooner if running averages of 10 to 20 are kept. Trends indicate systematic error; random error is revealed when measurements randomly exceed warning or control limits.

13. QC Evaluation for Small Sample Sizes

Small sample sizes, such as for field blanks and duplicate samples, may not be suitable for QC evaluation with control charts. QC evaluation techniques for small sample sizes are discussed elsewhere.³

14. Corrective Action

Quality control data outside the acceptance limits or exhibiting a trend are evidence of unacceptable error in the analytical process. Take corrective action promptly to determine and eliminate the source of the error. Do not report data until the cause of the problem is identified and either corrected or qualified. Example data qualifiers are listed in Table 1020:II. Qualifying data does not eliminate the need to take corrective actions, but allows for the reporting of data of known quality when it is either not possible or practical to reanalyze the sample(s). Maintain records of all out-of-control events, determined causes, and corrective actions taken. The goal of corrective action is not only to eliminate such events, but also to reduce repetition of the causes.

TABLE 1020:II. EXAMPLE DATA QUALIFIERS*

Symbol    Explanation
B         Analyte found in reagent blank. Indicates possible reagent or background contamination.
E         Reported value exceeded calibration range.
J         Reported value is an estimate because concentration is less than reporting limit or because certain QC criteria were not met.
N         Organic constituents tentatively identified. Confirmation is needed.
PND       Precision not determined.
R         Sample results rejected because of gross deficiencies in QC or method performance. Re-sampling and/or re-analysis is necessary.
RND       Recovery not determined.
U         Compound was analyzed for, but not detected.

* Based on U.S. Environmental Protection Agency guidelines.¹

Corrective action begins with the analyst, who is responsible for knowing when the analytical process is out of control. The analyst should initiate corrective action when a QC check exceeds the acceptance limits or exhibits trending and should report an out-of-control event to the supervisor. Such events include QC outliers, hold-time failures, loss of sample, equipment malfunctions, and evidence of sample contamination. Recommended corrective actions to be used when QC data are unacceptable are as follows:

• Check data for calculation or transcription error. Correct results if an error occurred.
• Check to see if the sample(s) was prepared and analyzed according to the approved method and SOP. If it was not, prepare and/or analyze again.
• Check calibration standards against an independent standard or reference material. If calibration standards fail, reprepare calibration standards and/or recalibrate the instrument and reanalyze affected sample(s).
• If a LFB fails, reanalyze another laboratory-fortified blank.
• If a second LFB fails, check an independent reference material. If the second source is acceptable, reprepare and reanalyze affected sample(s).
• If a LFM fails, check the LFB. If the LFB is acceptable, qualify the data for the LFM sample or use another method or the method of standard addition.
• If a LFM and the associated LFB fail, reprepare and reanalyze affected samples.
• If the reagent blank fails, analyze another reagent blank.
• If the second reagent blank fails, reprepare and reanalyze affected sample(s).
• If the surrogate or internal standard known addition fails and there are no calculation or reporting errors, reprepare and reanalyze affected sample(s).

If data qualifiers are used to qualify samples not meeting QC requirements, the data may or may not be usable for the intended purposes. It is the responsibility of the laboratory to provide the client or end-user of the data with sufficient information to determine the usability of qualified data.

15. References

1. U.S. ENVIRONMENTAL PROTECTION AGENCY. 1990. Quality Assurance/Quality Control Guidance for Removal Activities, Sampling QA/QC Plan and Data Validation Procedures. EPA-540/G-90/004, U.S. Environmental Protection Agency, Washington, D.C.
2. U.S. ENVIRONMENTAL PROTECTION AGENCY. 1997. 304(h) Streamlining Proposal Rule. Federal Register, March 28, 1997 (15034).
3. U.S. ENVIRONMENTAL PROTECTION AGENCY. 1994. National Functional Guidelines for Inorganic Data Review. EPA-540/R-94-013, U.S. Environmental Protection Agency, Contract Laboratory Program, Office of Emergency and Remedial Response, Washington, D.C.

1020 C. Quality Assessment

Quality assessment is the process used to ensure that quality control measures are being performed as required and to determine the quality of the data produced by the laboratory. It includes such items as proficiency samples, laboratory intercomparison samples, and performance audits. These are applied to test the precision, accuracy, and detection limits of the methods in use, and to assess adherence to standard operating procedure requirements.

1. Laboratory Check Samples (Internal Proficiency)

The laboratory should perform self-evaluation of its proficiency for each analyte and method in use by periodically analyzing laboratory check samples. Check samples with known amounts of the analytes of interest supplied by an outside organization, or blind additions, can be prepared independently within the laboratory to determine percent recovery of the analytes of interest by each method.

In general, method performance will have been established beforehand; acceptable percent recovery consists of values that fall within the established acceptance range. For example, if the acceptable range of recovery for a substance is 85 to 115%, then the analyst is expected to achieve a recovery within that range on all laboratory check samples and to take corrective action if results are outside of the acceptance range.

2. Laboratory Intercomparison Samples

A good quality assessment program requires participation in periodic laboratory intercomparison studies. Commercial and some governmental programs supply laboratory intercomparison samples containing one or multiple constituents in various matrices. The frequency of participation in intercomparison studies should be adjusted relative to the quality of results produced by the analysts. For routine procedures, semi-annual analyses are customary. If failures occur, take corrective action and analyze laboratory check samples more frequently until acceptable performance is achieved.

3. Compliance Audits

Compliance audits are conducted to evaluate whether the laboratory meets the applicable requirements of the SOP or consensus method claimed as followed by the laboratory. Compliance audits can be conducted by internal or external parties. A checklist can be used to document the manner in which a sample is treated from time of receipt to final reporting of the result. The goal of compliance audits is to detect any deviations from the SOP or consensus method so that corrective action can be taken on those deviations. An example format for a checklist is shown in Table 1020:III.

TABLE 1020:III. AUDIT OF A SOIL ANALYSIS PROCEDURE

Procedure                           Comment   Remarks
1. Sample entered into logbook      yes       lab number assigned
2. Sample weighed                   yes       dry weight
3. Drying procedure followed        no        maintenance of oven not done
4a. Balance calibrated              yes       once per year
4b. Cleaned and zero adjusted       yes       weekly
5. Sample ground                    yes       to pass 50 mesh
6. Ball mill cleaned                yes       should be after each sample

4. Laboratory Quality Systems Audits

A quality systems audit program is designed and conducted to address all program elements and provide a review of the quality system. Quality systems audits should be conducted by a qualified auditor(s) who is knowledgeable about the section or analysis being audited. Audit all major elements of the quality system at least annually. Quality system audits may be conducted internally or externally; both types should occur on a regularly scheduled basis and should be handled properly to protect confidentiality. Internal audits are used for self-evaluation and improvement. External audits are used for accreditation as well as education on client requirements and for approval of the end use of the data. Corrective action should be taken on all audit findings and its effectiveness reviewed at or before the next scheduled audit.

5. Management Review

Review and revision of the quality system, conducted by laboratory management, is vital to its maintenance and effectiveness. Management review should assess the effectiveness of the quality system and corrective action implementation, and should include internal and external audit results, performance evaluation sample results, input from end-user complaints, and corrective actions.

6. Bibliography

JARVIS, A.M. & L. SIU. 1981. Environmental Radioactivity Laboratory Intercomparison Studies Program. EPA-600/4-81-004, U.S. Environmental Protection Agency, Las Vegas, Nev.
INTERNATIONAL ORGANIZATION FOR STANDARDIZATION. 1990. General Requirements for the Competence of Testing and Calibration Laboratories, ISO/IEC Guide 25. International Org. for Standardization, Geneva, Switzerland.
AMERICAN SOCIETY FOR TESTING AND MATERIALS. 1996. Standard Practice for Determination of Precision and Bias of Applicable Test Methods of Committee D-19 on Water. ASTM D2777-96, American Society for Testing & Materials, West Conshohocken, Pa.

1030 DATA QUALITY

1030 A. Introduction

The role of the analytical laboratory is to produce measurement-based information that is technically valid, legally defensible, and of known quality. Quality assurance is aimed at optimizing the reliability of the measurement process. All measurements contain error, which may be systematic (with an unvarying magnitude) or random (with equal probability of being positive or negative and varying in magnitude). Determination of the systematic and random error components of an analytical method uniquely defines the analytical performance of that method.¹ Quality control (QC) procedures identify and control these sources of error.

1. Measures of Quality Control

Random error (precision) and systematic error (bias) are two routine indicators of measurement quality used by analysts to assess the validity of the analytical process. Precision is the closeness of agreement between repeated measurements; a measurement has acceptable precision if the random errors are low. Accuracy is the closeness of a measurement to the true value; a measurement is acceptably accurate when both the systematic and random errors are low. QC results outside the acceptance limits, as set by the data quality objectives, are evidence of an analytical process that may be out of control due to determinate errors such as contaminated reagents or degraded standards.

2. Measurement Error and Data Use

Measurement error, whether random or systematic, reduces the usability of laboratory data. As a measured value decreases, its relative error (e.g., relative standard deviation) may increase and its usable information decrease. Reporting tools, such as detection or quantitation limits, frequently are used to establish a lower limit on usable information content.

Laboratory data may be used for such purposes as regulatory monitoring, environmental decision-making, and process control. The procedures used to extract information for these different purposes vary and may be diametrically opposed. For example, a measurement for regulatory monitoring may be appropriately qualified when below the detection level because the error bar is relatively large and may preclude a statistically sound decision. Data collected over a period of time, however, may be treated by statistical methods to provide a statistically sound decision even when many of the data are below detection levels.²

3. The Analyst's Responsibility

The analyst must understand the measures of quality control and how to apply them to the data quality objectives of process control, regulatory monitoring, and environmental field studies. It is important that the quality objectives for the data be clearly defined and detailed before sample analysis so that the data will be technically correct and legally defensible.

4. References

1. YOUDEN, W.J. 1975. Statistical Manual of the Association of Official Analytical Chemists. Assoc. Official Analytical Chemists, Arlington, Va.
2. OSBORN, K.E. 1995. You Can't Compute with Less Thans. Water Environment Laboratory Solutions, Water Environment Federation, Alexandria, Va.
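The precision and bias measures described in 1030 A.1 can be estimated from replicate analyses of a check standard. The following Python sketch is illustrative only and is not part of this standard; the replicate results and the 50-µg/L true value are hypothetical.

    # Illustrative only: precision (relative standard deviation) and bias
    # (signed deviation of the mean from a known true value) computed from
    # replicate QC measurements of a check standard. Values are hypothetical.
    from statistics import mean, stdev

    true_value = 50.0                                        # check standard, ug/L
    replicates = [50.8, 49.6, 51.2, 50.1, 49.9, 50.5, 50.3]  # measured results, ug/L

    x_bar = mean(replicates)
    s = stdev(replicates)                  # sample standard deviation (n - 1 denominator)

    precision_rsd = 100.0 * s / x_bar      # relative standard deviation, %
    bias = x_bar - true_value              # signed bias, ug/L
    recovery = 100.0 * x_bar / true_value  # percent recovery

    print(f"mean = {x_bar:.2f} ug/L, s = {s:.2f} ug/L")
    print(f"precision (RSD) = {precision_rsd:.1f}%")
    print(f"bias = {bias:+.2f} ug/L (recovery = {recovery:.1f}%)")

In practice, results such as these would be compared against the acceptance limits set by the data quality objectives for the analysis.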

1030 B. Measurement Uncertainty


1. Introduction

Even with the fullest possible extent of correction, every measurement has error that is ultimately unknown and unknowable. The description of this unknown error is "measurement uncertainty."

Reporting uncertainty along with a measurement result is good practice, and may spare the user from making unwarranted or risky decisions based only on the measurement.

Whereas measurement error (E) is the actual, unknown deviation of the measurement (M) from the unknown true value (T), measurement uncertainty (U) is the state of knowledge about this unknown deviation, and is often expressed as U, as in M ± U. U may be defined as an uncertainty expression.¹,² This section concerns the definition of U, how to compute it, a recommendation for reporting uncertainty, the interpretation and scope of uncertainty, and other ways of expressing measurement uncertainty.

2. Error

A measurement can be related to the unknown true value and the unknown measurement error as follows:

M = T + E

This is a simple additive relationship. There are other plausible relationships between M and E, such as multiplicative or arbitrary functional relationships, which are not discussed here.

Because E is unknown, M must be regarded as an uncertain measurement. In some practical situations, a value may be treated as known. T* may be, for example, a published reference value, a traceable value, or a consensus value. The purpose of the substitution may be for convenience or because the measurement process that produced T* has less bias or variation than the one that produced M. For example, based on the average of many measurements, a vessel might be thought to contain T* = 50 µg/L of salt in water. It then may be sampled and routinely measured, resulting in a reported concentration of M = 51 µg/L. The actual concentration may be T = 49.9 µg/L, resulting in E = 51 - 49.9 = 1.1 µg/L.

To generalize the nature of uncertainty, measurement error may be negligible or large in absolute terms (i.e., in the original units) or relative terms (i.e., unitless, E ÷ T or T*). The perceived acceptability of the magnitude of an absolute error depends on its intended use. For example, an absolute error of 1.1 µg/L may be inconsequential for an application where any concentration over 30 µg/L will be sufficient. However, if it is to be used instead as a standard for precision measurement (e.g., of pharmaceutical ingredients), 1.1 µg/L too much could be unacceptable.

3. Uncertainty

Reported measurement uncertainty will contain the actual measurement error with a stated level of confidence. For example, if M ± U is presented as a 95% confidence interval, approximately 95% of the time the measurement error E will fall within the range of ±U.

4. Bias

Bias is the systematic component of error. It is defined as the signed deviation between the limiting average measured value and the true value being measured as the number of measurements in the average tends to infinity and the uncertainty about the average tends to zero. For example, the reason the T = 49.9 µg/L salt solution is thought to be T* = 50 µg/L could be a bias, B = 0.1 µg/L. The "leftover" error, 1.1 - 0.1 = 1.0 µg/L, is the random component. This random component (also called stochastic error) changes with each measurement. The bias is fixed, and may be related to the laboratory method used to produce T*. Usually, a recognized method will be used to produce or certify the traceable standard, a sample with a certificate stating the accepted true value T*. The method may be the best method available or simply the most widely accepted method. It is chosen to have very low error, both bias and random. Such a traceable standard may be purchased from a standards organization such as NIST.

5. Bias and Random Variation

Measurement error, E (and measurement uncertainty), can be split into two components, random and systematic:

E = Z + B

Random error, Z, is the component of the measurement error that changes from one measurement to the next, under certain conditions. Random measurement errors are assumed to be independent and to have a distribution, often assumed to be Gaussian (i.e., they are normally distributed). The normal distribution of Z is characterized by the distribution mean, µ, and standard deviation, σE. In discussion of measurement error distribution, µ is assumed to be zero because any non-zero component is part of bias, by definition. The population standard deviation, σE, can be used to characterize the random component of measurement error because the critical values of the normal distribution are well known and widely available. For example, about 95% of the normal distribution lies within the interval µ ± 2σE. Hence, if there is no measurement bias, and measurement errors are independent and normally distributed, M ± 2σE (95% confidence, assumed normal) is a suitable way to report a measurement and its uncertainty. More generally, normal probability tables and statistical software give the proportion of the normal distribution, and thus the % confidence gained, that is contained within ±kσE for any value of scalar k.

Usually, however, the population standard deviation, σE, is not known and must be estimated by the sample standard deviation, sE. This estimate of the standard deviation is based on multiple observations and statistical estimation. In this case, the choice of the scalar k must be based not on the normal distribution function, but on the Student's t distribution, taking into account the number of degrees of freedom associated with sE.

Systematic error (B) is all error that is not random, and typically is equated with bias. Systematic error also can contain outright mistakes (blunders) and lack of control (drifts, fluctuations, etc.).³ In this manual, the terms "systematic error" and "bias" are used interchangeably.

Systematic uncertainty often is more difficult to estimate and make useful than is random uncertainty. Knowledge about bias is likely to be hard to obtain, and once obtained it is appropriately and likely to be exploited to make the measurement less biased. If measurement bias is known exactly (or nearly so), the user can subtract it from M to reduce total measurement error.

If measurement bias is entirely unknown, and could take on any value from a wide but unknown distribution of plausible values, users may adopt a worst-case approach and report an extreme bound, or they may simply ignore the bias altogether. For example, historical data may indicate that significant interlaboratory biases are present, or that every time a measurement system is cleaned, a shift is observed in QC measurements of standards. In the absence of traceable standards, it is hard for laboratory management or analysts to do anything other than ignore the potential problem.

The recommended practice is to conduct routine QA/QC measurements with a suite of internal standards. Plot measurements on control charts, and when an out-of-control condition is encountered, recalibrate the system with traceable standards. This permits the laboratory to publish a boundary on bias, assuming that the underlying behavior of the measurement system is somewhat predictable and acceptably small in scale in between QA/QC sampling (e.g., slow drifts and small shifts).

6. Repeatability, Reproducibility, and Sources of Bias and Variation

a. Sources and measurement: The sources of bias and variability in measurements are many; they include sampling error, sample preparation, interference by matrix or other measurement quantities/qualities, calibration error variation, software errors, counting statistics, deviations from the method by the analyst, instrument differences (e.g., chamber volume, voltage level), environmental changes (temperature, humidity, ambient light, etc.), contamination of sample or equipment (e.g., carryover and ambient contamination), variations in purity of solvent, reagent, catalyst, etc., stability and age of sample, analyte, or matrix, and warm-up or cool-down effects, or a tendency to drift over time.

The simplest strategy for estimating typical measurement bias is to measure a traceable (known) standard, then compute the difference between the measured value M and the known value T, assumed to be the true value being measured:

M - T = B + Z

The uncertainty in the measurement of the traceable standard is assumed to be small, although in practice there may be situations where this is not an appropriate assumption. If random measurement uncertainty is negligible (i.e., Z ≈ 0), the difference, M - T, will provide an estimate of bias (B). If random uncertainty is not negligible, it can be observed and quantified by making a measurement repeatedly on the same test specimen (if the measurement process is not destructive). This may be part of a QA/QC procedure.

b. Repeatability: As quantified by the repeatability standard deviation (σRPT), repeatability is the minimal variability of a measurement system obtained by repeatedly measuring the same specimen while allowing no controllable sources of variability to affect the measurement. Repeatability also can be obtained by pooling sample standard deviations of measurements of J different specimens, as follows:

σRPT = [ (1/J) Σ σRPT,j² ]^½,  summed over specimens j = 1 to J

Repeatability also is called "intrinsic measurement variability," and is considered an approximate lower boundary to the measurement standard deviation that will be experienced in practice. The repeatability standard deviation sometimes is used to compute uncertainty intervals, ±U, that can be referred to as ultimate instrument variability, based on the Student's t distribution function (±U = ±ksRPT).

Common sense and application experience demonstrate that repeatability is an overly optimistic estimate to report as measurement uncertainty for routine measurement. In routine use, measurements will be subject to many sources of bias and variability that are intentionally eliminated or restrained during a repeatability study. In routine use, uncertainty in both bias (B) and variability (Z) are greater.

c. Reproducibility: As quantified by the reproducibility standard deviation (σRPD), reproducibility is the variability of a measurement system obtained by repeatedly measuring a sample while allowing (or requiring) selected sources of bias or variability to affect the measurement. With σRPD, provide a list of known applicable sources of bias and variability, and whether or not they were varied.

Barring statistical variation (i.e., variation in estimates of variability, such as the noisiness in sample standard deviations), the reproducibility standard deviation always is greater than the repeatability standard deviation, because it has additional components. Typically, one or more of the following is varied in a reproducibility study: instrument, analyst, laboratory, or day. Preferably design a study tailored to the particular measurement system (see 1030B.7). If the sample is varied, compute reproducibility standard deviations separately for each sample, then pool results if they are homogeneous. Treat factors varied in the study as random factors and assume them to be independent normal random variables with zero mean. However, this assumption often can be challenged, because the sample and possibly the target populations may be small (they may even be identical), and there may be a question of "representativeness." For example, six laboratories (or analysts, or instruments) may report usable measurements out of a total population of twenty capable of doing tandem mass spectrometry for a particular analyte and matrix. It is hard to know how representative the six are of the twenty, especially after a ranking and exclusion process that can follow a study, and whether the biases of the twenty are normally distributed (probably not discernible from six measurements, even if the six are representative).

It may be more appropriate to treat each factor with few, known factor values (i.e., choices such as laboratories) as fixed factors, to use the statistical term. Fixed factors have fixed effects. That is, each laboratory has a different bias, as might each analyst, each instrument, and each day, but these biases are not assumed to have a known (or knowable) distribution. Therefore, a small sample cannot be used to estimate distribution parameters, particularly a standard deviation. For example, assuming that variables are random, normal, and have zero mean may be inappropriate in an interlaboratory round-robin study. It must be assumed that every laboratory has some bias, but it is difficult to characterize the biases because of laboratory anonymity, the small number of laboratories contributing usable data, and other factors.

Because of these concerns about assumptions and the potential ambiguity of its definition, do not report reproducibility unless it is accompanied with the study design and a list of known sources of bias and variability and whether or not they were varied.

7. Gage Repeatability and Reproducibility, and the Measurement Capability Study

Combining the concepts of repeatability and reproducibility, the Gage Repeatability and Reproducibility (Gage R&R) approach has been developed.⁴ It treats all factors as random (including biases), and is based on the simplest nontrivial model:

Z = ZRPT + ZL

where:
ZRPT = normally distributed random variable with mean equal to zero and variance equal to σRPT², and
ZL = normally distributed random variable with mean equal to zero and with the variance of the factor (e.g., interlaboratory) biases, σL².

The overall measurement variation then is quantified by

σE = σRPD = (σRPT² + σL²)^½

Estimates for σRPT and σRPD usually are obtained by conducting a nested designed study and analyzing variance components of the results. This approach can be generalized to reflect good practice in conducting experiments. The following measurement capability study (MCS) procedure is recommended. The objective of such studies is not necessarily to quantify the contribution of every source of bias and variability, but to study those considered to be important, through systematic error budgeting.
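Before the study procedure is outlined, the following Python sketch shows one way the repeatability and interlaboratory variance components of the Gage R&R model above might be estimated from a small balanced study, using a one-way random-effects analysis of variance. It is illustrative only; the laboratory labels and results are hypothetical.

    # Illustrative only: one-way random-effects estimates of the repeatability
    # component (sigma_RPT) and the between-laboratory component (sigma_L),
    # combined as sigma_RPD = sqrt(sigma_RPT^2 + sigma_L^2) per the model above.
    from statistics import mean

    data = {                                # replicate results (ug/L) from each laboratory
        "lab_A": [50.1, 50.4, 49.8, 50.2],
        "lab_B": [51.0, 50.7, 51.3, 50.9],
        "lab_C": [49.5, 49.9, 49.7, 49.6],
    }

    J = len(data)                           # number of laboratories
    n = len(next(iter(data.values())))      # replicates per laboratory (balanced design)
    grand_mean = mean(x for results in data.values() for x in results)
    lab_means = {lab: mean(results) for lab, results in data.items()}

    ss_within = sum((x - lab_means[lab]) ** 2
                    for lab, results in data.items() for x in results)
    ms_within = ss_within / (J * (n - 1))   # repeatability (within-laboratory) variance
    ms_between = n * sum((m - grand_mean) ** 2 for m in lab_means.values()) / (J - 1)

    sigma_rpt = ms_within ** 0.5
    sigma_l_sq = max(0.0, (ms_between - ms_within) / n)   # negative estimates truncated to 0
    sigma_rpd = (sigma_rpt ** 2 + sigma_l_sq) ** 0.5

    print(f"sigma_RPT = {sigma_rpt:.3f} ug/L")
    print(f"sigma_L   = {sigma_l_sq ** 0.5:.3f} ug/L")
    print(f"sigma_RPD = {sigma_rpd:.3f} ug/L")

A designed measurement capability study, described next, generalizes this single-factor analysis to several sources of bias and variation.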

To perform a measurement capability study to assess measurement uncertainty through systematic error budgeting, proceed as follows:

Identify sources of bias and variation that affect measurement error. This can be done with a cause-and-effect diagram, perhaps with source categories of: equipment, analyst, method (i.e., procedure and algorithm), material (i.e., aspects of the test specimens), and environment.

Select sources to study, either empirically or theoretically. Typically, study sources that are influential, that can be varied during the MCS, and that cannot be eliminated during routine measurement. Select models for the sources. Treat sources of bias as fixed factors, and sources of variation as random factors.

Design and conduct the study, allowing (or requiring) the selected sources to contribute to measurement error. Analyze the data graphically and statistically (e.g., by regression analysis, ANOVA, or variance components analysis). Identify and possibly eliminate outliers (observations with responses that are far out of line with the general pattern of the data) and leverage points (observations that exert high, perhaps undue, influence).

Refine the models, if necessary (e.g., based on residual analysis), and draw inferences for future measurements. For random effects, this probably would be a confidence interval; for fixed effects, a table of estimated biases.

8. Other Assessments of Measurement Uncertainty

In addition to the strictly empirical MCS approach to assessing measurement uncertainty, there are alternative procedures, discussed below in order of increasing empiricism.

a. Exact theoretical: Some measurement methods are tied closely to exact first-principles models of physics or chemistry. For example, measurement systems that count or track the position and velocity of atomic particles can have exact formulas for measurement uncertainty based on the known theoretical behavior of the particles.

b. Delta method (law of propagation of uncertainty): If the measurement result can be expressed as a function of input variables with known error distributions, the distribution of the measurement result sometimes can be computed exactly.

c. Linearized: The mathematics of the delta method may be difficult, so a linearized form of M = T + E may be used instead, involving a first-order Taylor series expansion about key variables that influence E:

(M + δM) = T + δM/δG1 + δM/δG2 + δM/δG3 + ...

for sources G1, G2, G3, etc. of bias and variation that are continuous variables (or can be represented by continuous variables). The distribution of this expression may be simpler to determine, as it involves the linear combination of scalar multiples of the random variables.

d. Simulation: Another use of the delta method is to conduct a computer simulation. Again assuming that the distributions of measurement errors in input variables are known or can be approximated, a computer (i.e., Monte Carlo) simulation can obtain empirically the distribution of measurement errors in the result. Typically, one to ten thousand sets of random deviates are generated (each set has one random deviate for each variable), and the value of M is computed and archived. The archived distribution is an empirical characterization of the uncertainty in M.

e. Sensitivity study (designed experiment): If the identities and distributions of sources of bias and variation are known and these sources are continuous factors, but the functional form of the relationship between them and M is not known, an empirical sensitivity study (i.e., MCS) can be conducted to estimate the low-order coefficients (δM/δG) for any factor G. This will produce a Taylor series approximation to δM, which can be used to estimate the distribution of δM, as in ¶ c above.

f. Random effects study: This is the nested MCS and variance components analysis described in ¶ 7 above.

g. Passive empirical (QA/QC-type data): An even more empirical and passive approach is to rely solely on QA/QC or similar data. The estimated standard deviation of sample measurements taken on many different days, by different analysts, using different equipment, perhaps in different laboratories, can provide a useful indication of uncertainty.

9. Statements of Uncertainty

Always report measurements with a statement of uncertainty and the basis for the statement. Develop uncertainty statements as follows:⁴⁻⁶

Involve experts in the measurement principles and use of the measurement system, individuals familiar with sampling contexts, and potential measurement users to generate a cause-and-effect diagram for measurement error, with sources of bias and variation ("factors") identified and prioritized. Consult literature quantifying bias and variation. If needed, conduct one or more measurement capability studies incorporating those sources thought to be most important. In some cases, Gage R&R studies may be sufficient. These studies will provide "snapshot" estimates of bias and variation.

Institute a QA/QC program in which traceable or internal standards are measured routinely and the results are plotted on X and R control charts (or equivalent charts). React to out-of-control signals on the control charts. In particular, re-calibrate using traceable standards when the mean control chart shows a statistically significant change. Use the control charts, relevant literature, and the MCSs to develop uncertainty statements that involve both bias and variation.

10. References

1. YOUDEN, W.J. 1972. Enduring values. Technometrics 14:1.
2. HENRION, M. & B. FISCHHOFF. 1986. Assessing uncertainty in physical constants. Amer. J. Phys. 54:791.
3. CURRIE, L. 1995. Nomenclature in evaluation of analytical methods including detection and quantification capabilities. Pure Appl. Chem. 67:1699.
4. MANDEL, J. 1991. Evaluation and Control of Measurements. Marcel Dekker, New York, N.Y.
5. NATIONAL INSTITUTE OF STANDARDS AND TECHNOLOGY. 1994. Technical Note TN 1297. National Inst. Standards & Technology.
6. INTERNATIONAL STANDARDS ORGANIZATION. 1993. Guide to the Expression of Uncertainty in Measurement. International Standards Org., Geneva, Switzerland.
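As a concrete illustration of the Monte Carlo simulation approach described in ¶ 8d above, the following Python sketch propagates assumed input uncertainties through a hypothetical linear-calibration measurement model. The model, the standard deviations, and all numerical values are assumptions chosen for illustration and are not part of this standard.

    # Illustrative only: Monte Carlo propagation of input uncertainties to a
    # computed result. The measurement model and all values are hypothetical.
    import random
    from statistics import mean, stdev

    random.seed(1)
    N = 10_000                                   # number of simulated sets of deviates

    def concentration(response, blank, slope):
        # Hypothetical measurement model: concentration from a linear calibration.
        return (response - blank) / slope

    results = []
    for _ in range(N):
        response = random.gauss(1.250, 0.015)    # instrument response and its std. dev.
        blank = random.gauss(0.050, 0.005)       # blank signal and its std. dev.
        slope = random.gauss(0.024, 0.0004)      # calibration slope and its std. dev.
        results.append(concentration(response, blank, slope))   # archived value of M, ug/L

    results.sort()
    lower, upper = results[int(0.025 * N)], results[int(0.975 * N)]
    print(f"mean = {mean(results):.2f} ug/L, s = {stdev(results):.2f} ug/L")
    print(f"empirical 95% interval: {lower:.2f} to {upper:.2f} ug/L")

The empirical 2.5th and 97.5th percentiles of the archived results provide an approximate 95% uncertainty interval for M.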


1030 C. Method Detection Level

1. Introduction

Detection levels are controversial, principally because of inadequate definition and confusion of terms. Frequently, the instrumental detection level is used for the method detection level and vice versa. Whatever term is used, most analysts agree that the smallest amount that can be detected above the noise in a procedure and within a stated confidence level is the detection level. The confidence levels are set so that the probabilities of both Type I and Type II errors are acceptably small.

Current practice identifies several detection levels (see 1010C), each of which has a defined purpose. These are the instrument detection level (IDL), the lower level of detection (LLD), the method detection level (MDL), and the level of quantitation (LOQ). Occasionally the instrument detection level is used as a guide for determining the MDL. The relationship among these levels is approximately IDL:LLD:MDL:LOQ = 1:2:4:10.

2. Determining Detection Levels

An operating analytical instrument usually produces a signal (noise) even when no sample is present or when a blank is being analyzed. Because any QA program requires frequent analysis of blanks, the mean and standard deviation become well known; the blank signal becomes very precise, i.e., the Gaussian curve of the blank distribution becomes very narrow. The IDL is the constituent concentration that produces a signal greater than three standard deviations of the mean noise level, or that can be determined by injecting a standard to produce a signal that is five times the signal-to-noise ratio. The IDL is useful for estimating the constituent concentration or amount in an extract needed to produce a signal that permits calculating an estimated method detection level.

The LLD is the amount of constituent that produces a signal sufficiently large that 99% of the trials with that amount will produce a detectable signal. Determine the LLD by multiple injections of a standard at near-zero concentration (concentration no greater than five times the IDL). Determine the standard deviation by the usual method. To reduce the probability of a Type I error (false detection) to 5%, multiply s by 1.645 from a cumulative normal probability table. Also, to reduce the probability of a Type II error (false nondetection) to 5%, double this amount to 3.290. As an example, if 20 determinations of a low-level standard yielded a standard deviation of 6 µg/L, the LLD is 3.29 × 6 = 20 µg/L.¹

The MDL differs from the LLD in that samples containing the constituent of interest are processed through the complete analytical method. The method detection level is greater than the LLD because of extraction efficiency and extract concentration factors. The MDL can be achieved by experienced analysts operating well-calibrated instruments on a nonroutine basis. For example, to determine the MDL, add a constituent to reagent water, or to the matrix of interest, to make a concentration near the estimated MDL.² Prepare and analyze seven portions of this solution over a period of at least 3 d to ensure that the MDL determination is more representative than measurements performed sequentially. Include all sample processing steps in the determination. Calculate the standard deviation and compute the MDL. The replicate measurements should be in the range of one to five times the calculated MDL. From a table of the one-sided t distribution, select the value of t for 7 - 1 = 6 degrees of freedom and at the 99% level; this value is 3.14. The product of 3.14 times s is the desired MDL.

Although the LOQ is useful within a laboratory, the practical quantitation limit (PQL) has been proposed as the lowest level achievable among laboratories within specified limits during routine laboratory operations.³ The PQL is significant because different laboratories will produce different MDLs even though using the same analytical procedures, instruments, and sample matrices. The PQL is about five times the MDL and represents a practical and routinely achievable detection level with a relatively good certainty that any reported value is reliable.

3. Description of Levels

Figure 1030:1 illustrates the detection levels discussed above. For this figure it is assumed that the signals from an analytical instrument are distributed normally and can be represented by a normal (Gaussian) curve.⁴ The curve labeled B is representative of the background or blank signal distribution. As shown, the distribution of the blank signals is nearly as broad as for the other distributions, that is, σB = σI = σL. As blank analyses continue, this curve will become narrower because of increased degrees of freedom.

The curve labeled I represents the IDL. Its average value is located kσB units distant from the blank curve, and k represents the value of t (from the one-sided t distribution) that corresponds to the confidence level chosen to describe instrument performance. For a 95% level and n = 14, k = 1.782, and for a 99% limit, k = 2.68. The overlap of the B and I curves indicates the probability of not detecting a constituent when it is present (Type II error).

The curve at the extreme right of Figure 1030:1 represents the LLD. Because only a finite number of determinations is used for calculating the IDL and LLD, the curves are broader than the blank but are similar, so it is reasonable to choose σI = σL. Therefore, the LLD is kσI + kσL = 2kσL from the blank curve.

Figure 1030:1. Detection level relationship.
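The LLD and MDL calculations described in 1030 C.2 can be illustrated numerically. The following Python sketch is for illustration only; the replicate results are hypothetical, and the multipliers (3.29 for the LLD and 3.14 for an MDL based on seven replicates) are those given in the text above.

    # Illustrative only: LLD and MDL estimates from replicate low-level results,
    # following the calculations in 1030 C.2. All results are hypothetical.
    from statistics import stdev

    # LLD: standard deviation of replicate near-zero-level determinations, scaled
    # by 3.290 (1.645 for a 5% Type I error, doubled to also limit Type II error).
    blank_level_results = [5.1, 6.8, 4.9, 6.2, 5.5, 7.0, 5.8, 6.4, 5.0, 6.1,
                           5.9, 6.6, 5.3, 6.0, 5.7, 6.3, 5.2, 6.7, 5.6, 6.5]  # ug/L
    s_blank = stdev(blank_level_results)
    lld = 3.29 * s_blank

    # MDL: seven spiked portions carried through the complete method; the
    # one-sided t value at the 99% level for 7 - 1 = 6 degrees of freedom is 3.14.
    spiked_results = [9.2, 10.1, 8.8, 9.6, 10.4, 9.0, 9.8]   # ug/L
    s_spike = stdev(spiked_results)
    mdl = 3.14 * s_spike

    print(f"s(blanks) = {s_blank:.2f} ug/L  ->  LLD = {lld:.1f} ug/L")
    print(f"s(spikes) = {s_spike:.2f} ug/L  ->  MDL = {mdl:.1f} ug/L")
    print(f"PQL (about 5 x MDL) = {5 * mdl:.1f} ug/L")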

4. References

1. AMERICAN SOCIETY FOR TESTING AND MATERIALS. 1983. Standard Practice for Intralaboratory Quality Control Procedures and a Discussion on Reporting Low-Level Data. Designation D4210-83, American Soc. Testing & Materials, Philadelphia, Pa.
2. GLASER, J.A., D.L. FOERST, J.D. McKEE, S.A. QUAVE & W.L. BUDDE. 1981. Trace analyses for wastewaters. Environ. Sci. Technol. 15:1426.
3. U.S. ENVIRONMENTAL PROTECTION AGENCY. 1985. National Primary Drinking Water Standards: Synthetic Organics, Inorganics, and Bacteriologicals. 40 CFR Part 141; Federal Register 50: No. 219, November 13, 1985.
4. OPPENHEIMER, J. & R. TRUSSELL. 1984. Detection limits in water quality analysis. In Proc. Water Quality Technology Conference (Denver, Colorado, December 2-5, 1984). American Water Works Assoc., Denver, Colo.

1030 D. Data Quality Objectives


1. Introduction

Data quality objectives are systematic planning tools based on the scientific method. They are used to develop data collection designs and to establish specific criteria for the quality of data to be collected. The process helps planners identify decision-making points for data collection activities, determine the decisions to be made based on the data collected, and identify the criteria to be used for making each decision. This process documents the criteria for defensible decision-making before an environmental data collection activity begins.

2. Procedure

The data quality objective process comprises the stages explained in this section.

a. Stating the issue: Sometimes the reason for performing analyses is straightforward, e.g., to comply with a permit or other regulatory requirement. However, at times the reason is far more subjective, e.g., to gather data to support remedial decisions, or to track the changes in effluent quality resulting from process changes. A clear statement of the reason for the analyses is integral to establishing appropriate data quality objectives; this should include a statement of how the data are to be used, e.g., to determine permit compliance, to support decisions as to whether additional process changes will be necessary, etc.

b. Identifying possible decisions and actions: Initially, express the principal study question. For example: Is the level of contaminant A in environmental medium B higher than regulatory level C? This example is relatively straightforward.

Other questions may be more complex, for example: How is aquatic life affected by discharges into receiving waters by publicly owned treatment works (POTWs)? Break such a question down into several questions that might then be used to develop several decisions; organize these questions in order of consensus priority of all participating parties.

Identify alternative actions, including the no-action alternative, that could result from the various possible answers to the principal study questions.

In the first example above, if the level of contaminant in the environmental medium is higher than the regulatory level, some cleanup or treatment action may be indicated. If it is lower, the no-action alternative may be indicated, or the study team may wish to look at other environmental media and regulatory levels.

Finally, combine the principal study question with alternative actions into a decision statement. For the first example, the decision statement might be: Determine whether the mean level of contaminant A in environmental medium B exceeds the regulatory level C and requires remediation.

A multi-tiered decision statement might be: . . . if not, determine whether the maximum level of contaminant A in environmental medium D exceeds the regulatory level E and requires remediation.

c. Identifying inputs: Identify the information needed to make the necessary decision. Inputs may include measurements (including measurements of physical and chemical characteristics), data sources (historical), applicable action levels, or health effects concerns.

Identify and list the sources of information: previous data, historical records, regulatory guidance, professional judgment, scientific literature, and new data. Evaluate qualitatively whether any existing data are appropriate for the study. Existing data will be evaluated quantitatively later. Identify information needed to establish the action level. Define the basis for setting the action levels: they may be based on regulatory thresholds or standards or may be derived from issue-specific considerations, such as risk analysis. Determine only the criteria that will be used to set the numerical value. The actual numerical action level is determined later.

Confirm that the appropriate measurement methods exist to provide the necessary data. Assure that there are analytical methods for the parameters or contaminants of interest, and that they are appropriate for the matrix to be sampled. Consider the samples to be collected and the analytical methods to determine the potential for matrix interferences for each method. Assure that the limits of the method (e.g., detection limit, quantitation limit, reporting limit) are appropriate for the matrix (e.g., drinking water, wastewater, groundwater, leachate, soil, sediment, hazardous waste) and the parameter to be measured. Ensure that a laboratory is available to perform the analyses; determine its capacity, turnaround time, data product, and cost. Include this information as input to the decision-making process.

d. Identifying study limits: Identify both the geographical area and the time frame to which the decision will apply. Also define the scale of decision-making. Identify the smallest, most appropriate subsets of the total population for which decisions will be made. These subsets could be based on spatial or temporal boundaries. For example, while the spatial boundaries of the issue may be a 300-acre site, samples may be collected from, and decisions made for, each square of a grid made up of 50-ft squares drawn on a site map. Also, while temporal boundaries of the issue may be identified (as the duration of storm events), samples may be