PROCESS MEASUREMENT

How Accurate is Accurate? Part 1

WHAT'S ACCURATE FOR ONE APPLICATION MAY BE APPROXIMATE FOR ANOTHER. WHEN ASSESSING ACCURACY, THE FIRST STEP IS TO UNDERSTAND THE TERMS

This article is the first in a three-part series. Here, Part I examines the concepts and terminology used to define accuracy of process measurements. In July, Part II will tell how to combine errors within an instrument or system to provide an estimate of the total error. In August, Part III will provide a method to determine how accurate an instrument or system must be to perform its specified function.

By William Mostia

William L. Mostia, Jr., P.E., is a senior process control and computing engineer with Amoco Corp.'s Worldwide Engineering & Construction division in Houston.

The importance of accurate measurements is obvious to those who work in a plant where compensation (pay and bonuses) is tied to plant performance. The thickness of their pay envelopes is determined by how much product is produced, and what it took to produce it. Operators, engineers, and technicians in those plants have direct incentive to make sure their instruments are giving the most accurate readings possible.

In other plants, accuracy may seem less important. Who cares if the readings aren't exact, as long as there's enough usable product to keep the boss off your back? If you (and your boss) are lucky, you do. These days, increased competition and government regulations have boosted the demands for improved operating efficiency, business unit accountability, cost leadership, and quality certifications. The accuracy of measurement and control systems is of greater concern. Extensive applications of computers, data collection facilities, and databases are relying on accurate measurements. Accurately measuring all the process material and energy flows helps eliminate waste, improve operating efficiencies, and reduce costs. Businesses are putting stricter accountabilities at lower levels in business units, which requires accurate internal accountability for the unit as well as for intercompany and intracompany custody transfer. ISO 9000 certification is of increasing importance in the process industries.

AN ACCURATE GLOSSARY

Accuracy: The degree of conformity of an indicated value to a recognized accepted standard value.1 Accuracy for instruments is normally stated in terms of error (0.05% of upper range value [URV], 1% of span, 0.5% of reading, 3/4 degree, etc.). Accuracy can also be stated in terms of bias and precision errors.3,4 Looking at Figure 1, the shift of the bullet holes from the bullseye is the bias and the tightness of the bullet pattern is the precision error. Bias may be known or unknown. An example of a known bias is the deviation of a calibration standard from a National Institute of Standards and Technology (NIST)/National Bureau of Standards (NBS) reference. Large known biases are normally calibrated out. Small known biases are normally compensated out. Examples of unknown biases include human error, installation effects, environmental disturbances, etc.2 Precision errors are considered statistically random. They can be stated as the product of the measurement's standard deviation and the Student's t distribution, which will provide an error specification to the 95% confidence level. Also random are errors specified for transmitters, calculation devices, constant uncertainty, recorders, input/output devices, etc. Bias and precision errors can be individually combined using the root sum square (RSS) method, then the total bias and precision errors can be combined:

B = sqrt(b1^2 + b2^2 + ... + bn^2)
P = sqrt(p1^2 + p2^2 + ... + pn^2)
E = (B + P)

where: E = total probable error, B = total probable bias error, b = individual bias errors, P = total probable precision error, and p = individual precision errors.

Absolute accuracy: How close a measurement is to the NIST/NBS standard (the "golden ruler"). The accuracy traceability pyramid is shown in Figure 2.

Conformity: The maximum deviation of a calibration curve (average of upscale and downscale readings) from a specified characteristic curve.1
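The bias-and-precision combination above can be sketched in a few lines of Python. This is a minimal illustration of the glossary's equations, not code from the article, and the example error values are hypothetical:

```python
import math

def combine_errors(biases, precisions):
    """Combine bias and precision errors per the glossary: each group is
    root-sum-squared individually, then the two totals are added."""
    B = math.sqrt(sum(b**2 for b in biases))      # total probable bias error
    P = math.sqrt(sum(p**2 for p in precisions))  # total probable precision error
    return B + P                                  # total probable error E

# Hypothetical errors, in % of span:
E = combine_errors(biases=[0.3, 0.4], precisions=[0.05, 0.12])  # 0.63% of span
```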

CONTROL, June 1996

Conformity can be independent (best fit), zero-based, or terminal-based. This spec is commonly used as a measure of how well an instrument converts a nonlinear input signal to a linear output signal. This error specification is typically seen in temperature instruments.

Deadband: The range through which an input signal may be varied, upon reversal of direction, without initiating an observable change in the output signal.1

Drift or stability error: The undesired change in output over a specified period of time for a constant input under specified reference operating conditions.1 Drift error is influenced by environmental exposure and can be significant (even greater than the reference error), but it can be controlled by periodic calibration.

Dynamic error: The error resulting from the difference between the reading of an instrument and the actual value during a change in the actual value. Instrument damping contributes to this error, as do measurement and transport deadtime, and the significance depends on the process time constant. Dynamic error must be considered when designing safety systems: if your instrumentation system cannot measure a developing hazardous condition in time for the safety system to react, the process may not get to a safe state in a timely manner.

EMI/RFI errors: Errors due to electromagnetic or radio frequency interference.

Filter error: Error caused by the improper application of a filter on the signal. This error can also be caused by improper settings in exception reporting and compression algorithms.

Hysteresis: The dependence of the output, for a given excursion of the input, upon the history of prior excursions and the direction of the current traverse.1

Influence errors: Errors due to operating conditions deviating from base or reference conditions. Typically specified as effects on the zero and span, these errors reflect the instrument's capacity to compensate for variations in operating conditions.

Instrument accuracy is an important part of ISO 9000 certification. The government is putting more and more regulations on the process industries, many requiring more accurate measurements and data collection. So accuracy is important. But what is accuracy? The language of accuracy is not universal, and any discussion depends on a common understanding of terminology. (For definitions of commonly used terms, see sidebar.)

[Figure 1: BIAS VS. PRECISION. The shift of the bullet holes from the bullseye is the bias error and the tightness of the bullet pattern is the precision.]

Absolute Accuracy or Repeatability?

By definition, all accuracy is relative: how accurate a measurement is compared to a standard. When discussing the error of an instrument or system, we need to determine what form of accuracy we need for a particular function. Absolute accuracy refers to how close a measurement is in relation to a traceable standard (see the traceability pyramid in Figure 2). "Repeatability," on the other hand, refers to how accurately a measurement can be duplicated or repeated. (The term "repeatability" in this context is the common field usage and not the ISA definition of the term. The common field usage is essentially the same as the ISA term "reproducibility," that is, the combination of linearity, repeatability, hysteresis, and drift.)

If it is important that you make a measurement in reference to an absolute value, we are talking about absolute accuracy. When most people talk about accuracy, they are talking about absolute accuracy. In order to have absolute accuracy for your measurement, you must have traceability from your measuring device to the National Institute of Standards and Technology (NIST)/National Bureau of Standards (NBS) reference standards (the "golden rulers"). The accuracy of your measurements is directly dependent on the accuracy of your calibrators, which is directly related to the care and feeding of your calibrators, the calibrators' calibration cycle, and the traceability of your calibrators.

In a process plant where ambient and process conditions can vary substantially from the reference conditions, maintaining accuracy can be a daunting task. Calibration cycle and methods, instrument location, instrument selection, maintenance, recordkeeping, and training all become important issues in maintaining your instruments' accuracy. A formal calibration program is the only way to ensure the accuracy of your instruments.
This is essential for achieving and maintaining an ISO 9000 certification.

In days past, and probably today in some plants, it was not uncommon for an operator to control a flow to so many "roots" or some other variable to so many divisions. Here the concern is how repeatable the measurement is: if we are controlling to seven roots today, we want seven roots tomorrow to be the same thing. We want the instrument to provide the same value each time for the same process and operating conditions. Many controllers that have relatively crude setpoints, such as field pneumatic controllers and HVAC thermostats, specify "repeatability." The object is to maintain an acceptable setpoint, with little concern about the absolute value. More critical applications such as laboratories and research facilities often use calibration curves.

The accuracy of measurement or control is related to a particular reading, which is then translated to an absolute accuracy value using a calibration curve. For this type of measurement, we are again talking about repeatability. For devices where influence errors are minimized and the drift error is controlled by the calibration cycle, this method can reach a higher level of accuracy.

Error Specifications

Manufacturers specify error limits for an instrument. These are not the actual errors that a particular instrument will have, but rather the limits of the error that the instrument could have. On an individual basis, a given instrument may be able to be calibrated to a higher accuracy than its specification, but a group of the same instruments will fall within the error specification. If a manufacturer states an error specification, it is generally true to within the vendor's testing methodology. The user should question any manufacturer who does not give an error specification. You may find that there is a good reason the specification was left off.

[Figure 2: TRACEABILITY PYRAMID. Absolute accuracy refers to how close a measurement is to a traceable standard, such as NIST/NBS standards.]

But manufacturers do not typically give just one error specification. Instead, they give multiple specifications, and sometimes in different ways. This is because the error will typically vary when ambient and process conditions vary from the reference conditions where the instrument is calibrated, and the different error specifications allow the user to determine the probable error at other conditions. So in reality, these error specifications define an error envelope. An example of the error vs. ambient temperature envelope for a generic transmitter is given in Figure 3.

Not All Error Specifications Are Created Equal

Errors are specified in a number of different ways.
In order to compare and combine error specifications, they must all be of the same type. Some of the typical error specifications are:

±0.2% of calibrated span, including the combined effects of linearity, hysteresis, and repeatability;
±0.2% of upper range limit (URL);
±0.5% of span per 100°F change;
±0.75% of reading;
six months from calibration;
±0.1% of calibrated span or upper range value (URV), whichever is greater;
±1°F;
±1/2 count; and
±1 least significant digit (LSD).

Influence errors are often significant contributors to the overall error of an instrument.

Inherent errors: Errors inherent in an instrument at reference conditions. These are due to the inherent mechanical and electrical design and manufacturing of the instrument.

Linearity: The deviation of the calibration curve from a straight line. Linearity is normally specified in relation to the location of the straight line relative to the calibration points: independent (best straight-line fit), zero-based (best straight line through the zero calibration point), or terminal-based (straight line between the zero and 100% calibration points). Devices are normally calibrated at zero and at their upper range value (URV), the terminal points. It is, however, possible to calibrate a device at other points to get a better fit. ISA Standard S51.1 provides a good description of linearity.

Mounting position effect: The effect on the instrument's calibration due to mounting position. This is typically a zero shift error that can be calibrated out after installation.

Overrange influence error: Error resulting from the overranging of the instrument after installation. This is normally a zero shift error.

Power supply effect: The effect on accuracy due to a shift in power supply voltage. This error could also apply to the air supply pressure for a pneumatic instrument.

Reference accuracy: Accuracy, typically in percent of calibrated span, specified by the vendor at a reference temperature, barometric pressure, static pressure, etc. This accuracy specification may include the combined effects of linearity, hysteresis, and repeatability. Reference accuracy also may be stated in terms of mode of operation: analog, digital, or hybrid.

Reference junction compensation accuracy: The accuracy of the cold junction compensation for thermocouple temperature transmitters.

Transmitter reference accuracy is typically rated in percent of span or URV, while primary measuring elements, such as orifice plates, turbine meters, and thermocouples, are rated in percent of reading or actual measurement error. For a transducer that is connected to a thermocouple, the error specifications are not the same: the former is typically in percent of span or URV while the latter is in percent of reading. In order to combine these errors, they must be the same type.
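Converting a percent-of-reading spec to percent of span is a one-line calculation. The sketch below is ours, not from the article, and the numbers are hypothetical:

```python
def reading_error_to_span_pct(err_pct_of_reading, reading, span):
    """Convert a percent-of-reading error spec to percent of span so it
    can be combined with percent-of-span specs (e.g., by RSS)."""
    return err_pct_of_reading * reading / span

# Hypothetical example: a 0.75%-of-reading element read at 300 units
# on a 0-500 unit span contributes 0.45% of span.
e_span = reading_error_to_span_pct(0.75, 300, 500)
```

Note that the converted error depends on the reading, so the conversion should be done at the operating point (or worst-case point) of interest.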


Digital device errors are usually resolution errors, which are related to conversion errors, roundoff errors, and numerical precision. For example, a 12-bit resolution input device resolves the signal into 0-4,095 counts. It cannot resolve to less than 1/2 count out of 4,095 counts. Roundoff error occurs when a digital device rounds off a partial bit value: 1/4 bit, 1/2 bit, etc. These errors can also be specified in terms of the least significant bit (LSB). For digital displays, errors are typically stated in terms of least significant digit or in percent of reading. Precision errors involve the kind of math that is being done: single precision vs. double precision, integer vs. floating point math, etc.

Actual Errors

The error stated in the error specifications represents the limits of the error, not necessarily the error that the device will exhibit in the field. Actual error can be determined by testing the instrument in use and under process ambient conditions. Testing a single instrument does not provide sufficient data to characterize a group of the instruments, but there are statistical methods that can be used when a small group of instruments is tested to estimate the accuracy of the instrument in general.5 This can be particularly useful in comparing manufacturers.

The actual errors determined during calibration can be compared to prior calibrations for the same instrument to measure the need to service or replace the instrument or to change the calibration cycle. This is becoming much more practical with the advent of calibration management systems that can easily retain the calibration history of an instrument.

[Figure 3: ERROR VS. AMBIENT TEMPERATURE. The error due to ambient temperature for a generic transmitter is a combination of the reference accuracy and the temperature error. Due to cancellation, the total probable error is less than the worst-case error.]
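The 12-bit resolution arithmetic above can be checked with a short Python sketch (ours, not from the article):

```python
def count_resolution(bits):
    """Maximum count for an A/D converter of the given bit width.
    A 12-bit converter resolves the signal into 0-4,095 counts."""
    return 2**bits - 1

def resolution_error_pct(bits):
    """Resolution error as percent of span, taken as 1/2 count of full scale."""
    return 0.5 / count_resolution(bits) * 100.0

# A 12-bit input: 1/2 count out of 4,095 counts is about 0.0122% of span.
err = resolution_error_pct(12)
```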

Resolution: The smallest interval that can be distinguished between two measurements. In digital systems, this is related to the number of bits of resolution of an analog signal (e.g., the analog span divided by the count resolution1).

Repeatability: The degree of agreement of a number of consecutive measurements of the output for the same value of input under the same operating conditions, approaching from the same direction.1 Note that this specification is approaching from one direction and does not include any effects of hysteresis or deadband. The repeatability error specification is the largest error determined from both upward and downward traverses. The field or common usage of this term is closer to the term "reproducibility" than to the ISA definition.

Reproducibility: The degree of agreement of repeated measurements of the output for the same value of input made under the same operating conditions over a period of time, approaching from both directions. Reproducibility includes the effects of hysteresis, deadband, drift, and repeatability.1 This term is an excellent specification, but for some reason vendors have typically chosen not to use it for modern instruments.


Sampling error: The error caused by sampling a signal with too low a sampling frequency. In general, the sampling frequency should be at least twice the highest frequency in the signal being sampled.

Static pressure effect error: For differential pressure transmitters, the percent change in zero and span due to a static pressure change from reference conditions. This error can be minimized by zeroing the transmitter under actual operating static pressure.

Temperature effect error: The percent change in zero and span for an ambient temperature change from reference conditions. This can be a significant contributor to the total error.

Vibration influence error: The error caused by exposing the instrument to vibrations, normally specified per g of acceleration and up to some frequency.

REFERENCES:
1. "Process Instrumentation Terminology," ISA-S51.1-1993, Instrument Society of America.
2. Measurement Uncertainty Handbook, Dr. R.B. Abernethy et al. and J.W. Thompson, ISA, 1980.
3. "Is That Measurement Valid?," Robert F. Hart and Marilyn Hart, Chemical Processing, October 1988.
4. "What Transducer Performance Specs Really Mean," Richard E. Tasker, Sensors, November 1988.
5. "Performance Testing and Analysis of Differential Pressure and Gauge Pressure Transmitters," Lyle E. Lofgren, ISA, 1986.
6. "Calibration: Heart of Flowmeter Accuracy," Steve Hope, P.E., INTECH, April 1994.


PROCESS MEASUREMENT

How Accurate is Accurate? Part 2

THE ACCURACY OF A MEASUREMENT STARTS WITH THE INSTRUMENT ERROR SPECIFICATIONS. BUT IN ORDER TO COMPARE INSTRUMENTS OR SYSTEMS, THE ERRORS FROM DIFFERENT SOURCES MUST BE COMBINED TO CALCULATE THE OVERALL ACCURACY

This article is the second in a three-part series. In June, Part I examined the concepts and terminology used to define accuracy of process measurements. Here, Part II tells how to combine errors within an instrument or system to provide an estimate of the total error. In August, Part III will provide a method to determine how accurate an instrument or system must be to perform its specified function.

By William Mostia

William L. Mostia, Jr., P.E., is a senior process control and computing engineer with Amoco Corp.'s Worldwide Engineering & Construction division in Houston.

One of the less pleasant experiences for the unwary equipment specifier is finding out an expensive instrument doesn't give readings accurate enough to control the process. For the person whose signature is at the bottom of the requisition, the prospect of going to the boss and asking for a better replacement is, at best, embarrassing. In some cases, the inadequate instrument stays on the job, giving readings no one really believes. At worst, the existence of a problem is either undiscovered or denied, and the instrument becomes a source of trouble that can range from variations in product quality to unexplained equipment malfunctions and shutdowns.

Instrument vendors are not a lot of help [see "Fight Against 'Specmanship' Should Follow Food Label Lead," CONTROL, June '94, p58]. They typically advertise their instruments in terms of reference accuracy. But when you look further, you find that the manufacturer has provided a multitude of error specifications to cover the range of process and environmental conditions where the instrument might be used. These errors can be significant and must be evaluated before the full story of the instrument's accuracy can be determined.

[Figure 1: TWO-COMPONENT SYSTEM. The errors of individual components must be combined to get the total error of the system. But simply adding the errors will result in underestimating the system accuracy.]

CONTROL, July 1996

For example, a 0.05% reference accuracy

over a given range might be qualified by an ambient temperature of 20 ±2°C, at 50% relative humidity, and within 90 days of calibration. To know the accuracy rating a year later at 28°C and 90% RH, it is necessary to combine the errors from these different conditions.

So the savvy specifier, given the error specifications or actual errors for an instrument or system, needs to know how to combine them for an overall accuracy number. This number can then be used to compare instruments, do proration calculations, determine sensitivity, analyze systems, and so on.

Individual Instrument Accuracy

Calculating worst case error (WCE), where all the errors in an instrument or system are added up in the worst possible way, gives a large error number. Field tests on instrument systems have shown that the actual errors are considerably less than the worst case errors. It also has been determined that these errors are statistically random, which means they can be combined using the root sum square (RSS) method.2,3 For example, for a generic differential pressure transmitter:

Base Conditions:
Temperature: 75°F
Static Pressure: 0 psid (pounds per square inch, differential)
Delta P: 100 in.WC (inches of water column)

Operating Conditions:
Ambient temperature: 25-125°F
Static Pressure: 500 psi

Transmitter Specs:
Calibrated Span = 100 in.WC
Upper Range Limit (URL) = 250 in.WC
Reference accuracy = 0.1% of calibrated span (including the effects of hysteresis, linearity, and repeatability)
Temperature Zero/Span Shift Error = (0.25% of URL + 0.25% of calibrated span) per 100°F
Static Pressure Zero Shift Error = 0.25% of URL per 1,000 psi
Static Pressure Span Shift Error = 0.25% of span per 1,000 psi

The worst case error for this differential pressure transmitter (at a 50°F deviation from the base temperature and 500 psi static pressure) is:

WCE = e_ref + e_temp + e_zero,stat + e_span,stat + ...

= 0.1% + [0.25% x (50°F/100°F) x (250 in.WC/100 in.WC) + 0.25% x (50°F/100°F)] + 0.25% x (500 psi/1,000 psi) x (250 in.WC/100 in.WC) + 0.25% x (500 psi/1,000 psi)

= 0.1% + (0.3125% + 0.125%) + 0.3125% + 0.125% = 0.975%

where:
e_ref = reference accuracy,
e_temp = combined zero and span shift due to temperature,
e_zero,stat = zero shift due to static pressure,
e_span,stat = span shift due to static pressure, and
e_other = other errors as appropriate.

But since individual instrument accuracy specifications are considered statistically random, they can be combined using a root sum square calculation. According to such a calculation, the total probable error (TPE) for this differential pressure transmitter is:

TPE = sqrt[0.1%^2 + (0.3125% + 0.125%)^2 + 0.3125%^2 + 0.125%^2] = 0.56%

We can see that in this case the total probable error is about 57% of the worst case error.

Instrument System Accuracy Calculations

It also is possible to calculate the total probable error for a system that contains several instruments. If no calculations are required, the RSS method can be used to combine the errors of the individual instruments. You must ensure that all the errors are specified in the same manner. For example, for the system shown in Figure 1 (a temperature transmitter with a 0.5% of span error on a 0-500°F span, feeding a display with a 0.1% of reading error, at a 250°F reading), the total probable system error (TPE_sys) is estimated by:

TPE_sys = sqrt[0.5%^2 + (0.1% x 250°F/500°F)^2] = 0.502%

Notice that since the temperature transmitter error is in percent of span (0-500°F) and the display error is in percent of reading, the display accuracy must be converted to a percent of span error number before it can be used in the error calculation.

If instrument errors must first be calculated, the total probable error calculation is more complex because input errors are now added, subtracted, multiplied, divided, square rooted, etc. The effects of these mathematical operations must be taken into account in addition to any inherent error in the calculation device. The calculation error can be determined using a Taylor series expansion. For the general case:

D = f(A, B, C)

it can be shown2,3 that the calculation error can be estimated by:

e_D = sqrt[(dD/dA x e_A)^2 + (dD/dB x e_B)^2 + (dD/dC x e_C)^2]
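The worst case and total probable error arithmetic for the differential pressure transmitter example, and the two-component system of Figure 1, can be sketched in Python. This is our illustration of the article's RSS method; the variable names are ours:

```python
import math

def rss(errors):
    """Root-sum-square combination of statistically random errors (% of span)."""
    return math.sqrt(sum(e**2 for e in errors))

# Generic DP transmitter from the example (all errors in % of calibrated span):
span, url = 100.0, 250.0        # in.WC
dT, dP = 50.0, 500.0            # deg F deviation from base; psi static pressure
e_ref = 0.1
e_temp_zero = 0.25 * (url / span) * (dT / 100.0)   # 0.3125
e_temp_span = 0.25 * (dT / 100.0)                  # 0.125
e_sp_zero = 0.25 * (url / span) * (dP / 1000.0)    # 0.3125
e_sp_span = 0.25 * (dP / 1000.0)                   # 0.125

# Worst case: straight addition. Total probable: RSS.
wce = e_ref + e_temp_zero + e_temp_span + e_sp_zero + e_sp_span
tpe = rss([e_ref, e_temp_zero + e_temp_span, e_sp_zero, e_sp_span])

# Two-component system of Figure 1: 0.5%-of-span transmitter plus a
# 0.1%-of-reading display, reading 250 F on a 0-500 F span:
tpe_sys = rss([0.5, 0.1 * 250.0 / 500.0])
```

Running this reproduces the article's numbers: WCE = 0.975%, TPE ≈ 0.56%, and TPE_sys ≈ 0.502%.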

to which we must add the effect of the inherent device error, e:

e_total = sqrt(e_D^2 + e^2)

For example, for the equation:

D = K x A x B

the error is estimated by:

e_D = sqrt[(K x B x e_A)^2 + (K x A x e_B)^2 + e^2]

This assumes that there is not any uncertainty associated with the "K" factor. Calculations involving constants that have an uncertainty associated with them, such as an orifice calculation, must consider the effect of the uncertainty in the calculation. In the Flow Measurement Engineering Handbook4, the errors of the orifice calculation are discussed at length. There also is a treatment of sensitivity calculations, which can be used to determine the effect of a given error or uncertainty in a calculation.

In order to make the error calculation for a calculating device, the errors must be stated in the same manner (usually in percent of span) and the signals must be normalized (range from 0-1.0). Also note that the error analysis must consider the operating ranges of all the input variables involved so that the worst case variable values will be used in the error calculation. For example, the larger the variables on the top of an equation and the smaller the variables on the bottom, the larger the calculated error. "The Application of Statistical Methods in Evaluating the Accuracy of Analog Instruments and Systems," by C.S. Zalkind and F.G. Shinskey2, is an excellent paper on this subject and provides error estimates for a number of calculation devices. Some of the more common ones are listed in Table I.

TABLE I
EQUATION            ERROR
B = K x A           e_B = sqrt[(K x e_A)^2 + e^2]
C = K1 x A + K2 x B e_C = sqrt[(K1 x e_A)^2 + (K2 x e_B)^2 + e^2]
C = K x A x B       e_C = sqrt[(K x B x e_A)^2 + (K x A x e_B)^2 + e^2]

where e is the inherent device error.

Conclusions

The estimation of an instrument or system accuracy can sometimes appear to be a complicated issue. The estimate of the accuracy of an individual instrument is the combination of the reference accuracy and the influence errors using the root sum square (RSS) method. Error for instruments that perform calculations can be estimated using Taylor series expansion and RSS methods. The error for an instrument system can then be estimated by combining the individual instrument errors using the RSS method. Using these techniques, you can, with some relatively simple calculations, estimate the accuracy envelope of any instrument or system.
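The Taylor-series error estimates of Table I can be encoded directly. The sketch below is ours (function names and example values are hypothetical, and inputs are assumed to be normalized 0-1.0 signals with errors in the same units):

```python
import math

def rss(terms):
    """Root-sum-square of a list of error terms."""
    return math.sqrt(sum(t**2 for t in terms))

def error_sum(K1, K2, eA, eB, e_dev):
    """Error of C = K1*A + K2*B per Table I, including device error e_dev."""
    return rss([K1 * eA, K2 * eB, e_dev])

def error_product(K, A, B, eA, eB, e_dev):
    """Error of D = K*A*B per the Taylor-series estimate:
    e_D = sqrt((K*B*eA)^2 + (K*A*eB)^2 + e_dev^2)."""
    return rss([K * B * eA, K * A * eB, e_dev])

# Hypothetical multiplier: K = 1, A = B = 0.8 normalized, 0.1% input
# errors, 0.05% inherent device error:
eD = error_product(1.0, 0.8, 0.8, 0.1, 0.1, 0.05)
```

Note how the product's error depends on the operating point (A and B), which is why the error analysis must consider the operating ranges of the input variables.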

REFERENCES:
1. "There's More to Transmitter Accuracy Than the Spec," William J. Demorest, Jr., Instrument & Control Systems, May 1983.
2. "The Application of Statistical Methods in Evaluating the Accuracy of Analog Instruments and Systems," C.S. Zalkind and F.G. Shinskey, The Foxboro Co.
3. Measurement Uncertainty Handbook, Dr. R.B. Abernethy et al. and J.W. Thompson, ISA, 1980.
4. Flow Measurement Engineering Handbook, R.W. Miller, The Kingsport Press, 1983.
5. "Predicting Flow Rate System Accuracy," William S. Buzzard, Fischer & Porter Technical Information 10E-12, January 1980.
6. "What Do All Those Transducer Specs Really Mean?," Chuck Wright, Personal Engineering, February 1996.
7. "Performance Testing and Analysis of Differential Pressure and Gauge Pressure Transmitters," Lyle E. Lofgren, ISA, 1986.

"^Accurate0 isAccurate? Part3


ACCURATE MEASUREMENTS ARE ESSENTIAL FOR OPTIMUM CONTROL, BUT EVERY INCREMENT OF ACCURACY COMES AT A COST. HERE'S A WAY TO DECIDE HOW ACCURATE AN INSTRUMENT TO SPECIFY

This article is the third in a three-part series. In June, Part I examined the concepts and terminology used to define accuracy of process measurements. In July, Part II told how to combine errors within an instrument or system to provide an estimate of the total error. Here, Part III provides a method to determine how accurate an instrument or system must be to perform its specified function. ow accurate must an instrument be? The question reeks of compromiseintuitively, we want the most accurate instrument we can get. But raising accuracy to the status of a hoh/ grail is the perogative of philosophers and perhaps research scientists, for whom the questions of cost and reliability are piffling trifles. Engineers have the responsibility of choosing die lowest-cost, easiest to maintain equipment that will get die job done. The basic question is how accurate an instrument or system has to be to avoid influencing the measurement or significantly degrading the system error specification. In Part II, we saw how to add errors from various sources to determine the system error. For any proposed instrument and system, the errors of the system components can be added using the root sum square method to give an estimate of the system accuracy. But how accurate does the system have to be? The accuracy of the measuring system should be higher than the required measurement accuracy. But how much higher? One way to understand the impact of instrument accuracy is to consider instrument calibration. Just as instruments should be more accurate than the required measurement accuracy, calibration equipment must be more accurate than the instruments to be calibrated. In industry, a common rule-of-thumb accuracy requirement for calibration is three to four times (3-4X) more accurate than the instrument performance specification. Other sources give other ratiosyou can find 2X' ', 5X\S, 3-1 OX 4 , 4-lOX ! , or other guidelines. 
API MPMS CHAPTER 2 1 - "Flow Measurement Using Electronic Metering Systems," for example, states that the calibration standard should be two times better than the measurement specification5. ISA Standard SS1.1, on the other hand, states that for greater than 1 OX, the error of the calibration device does not have to be considered; between 3X and 10X it must be considered; and if less than 3X, the calibration device should not be used. Where do these numbers come from? The root sum square (RSS) equation, which (continued on p68)
CONTROL

By William Mostia

William L Mtttia. Jr., P.L, Is a senior process control and computing engineer with Amoco Corp.'s Worldwide Engineering & Construction division In Houston.

AUGUST/1996

67

(continued from p67)

takes the square root of the sum of the squares of the errors, shows the reasoning:

TPE = √(e_i² + e_c²)

where:
TPE = total probable error
e_i = the error contributed by the measuring instrument or system
e_c = the error contributed by the calibration device

If we rewrite the calibration device error in terms of the measuring system error, e_c = e_i/X:

TPE = √(e_i² + (e_i/X)²) = (e_i/X)√(X² + 1)

where:
X = the ratio of the error of the measuring instrument or system to the calibration device error

Now let's define the ratio of the error effect, R_e, to be:

R_e = e_i/(TPE − e_i)

Substituting:

R_e = X√(X² + 1) + X²

For example, consider an instrument calibrated for 0-100 in. WC that has an accuracy of 0.1% of span. Using a calibrator with an accuracy of 0.04% of reading (at the upper range value, 0.04% of reading equals 0.04% of span):

TPE = √((0.1)² + (0.04)²) = √0.0116 = 0.1077%

The error contribution of the calibrator is 0.1077 − 0.1 = 0.0077%. The error effect ratio is:

R_e = e_i/(TPE − e_i) = 0.1/0.0077 = 12.98

or, equivalently, with X = 0.1/0.04 = 2.5:

R_e = 2.5√((2.5)² + 1) + (2.5)² = 12.98
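The worked example can be checked numerically. This sketch computes TPE, then R_e both from its definition and from the closed-form expression, confirming they agree:

```python
import math

# Instrument error: 0.1% of span; calibrator error: 0.04% of reading
# (taken at the upper range value, so also 0.04% of span)
e_i, e_c = 0.1, 0.04

tpe = math.sqrt(e_i**2 + e_c**2)        # total probable error
r_e = e_i / (tpe - e_i)                 # error-effect ratio, from definition

x = e_i / e_c                           # accuracy ratio X = 2.5
r_e_closed = x * math.sqrt(x**2 + 1) + x**2

print(round(tpe, 4), round(r_e, 2), round(r_e_closed, 2))  # 0.1077 12.98 12.98
```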
The R_e for some common values of X are shown in Table I:

Table I

X      R_e
2      8.5
3      18.5
4      32.5
5      50.5
7      98.5
10     200.5
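Table I can be reproduced directly from the R_e expression derived above; a minimal sketch:

```python
import math

def error_effect_ratio(x):
    """R_e = X * sqrt(X^2 + 1) + X^2, for accuracy ratio X."""
    return x * math.sqrt(x**2 + 1) + x**2

# Prints the X and R_e columns of Table I
for x in (2, 3, 4, 5, 7, 10):
    print(x, round(error_effect_ratio(x), 1))
```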


From this one can see that for an "X" value slightly greater than 2X (2.182, to be exact), the effect on the base accuracy is 10:1, or an order of magnitude. For 3-4X the effect is about 18:1 to 32:1, and for about 7X (7.053) the effect is 100:1, or two orders of magnitude. Extending this to a more general accuracy


requirement, the minimum "X" is a little over 2, 3-4X should be adequate for most things, 5X is appropriate for higher accuracy requirements, and 7-10X or more should be used for very high accuracy.

However, the practical use of this information for process field instrument calibration may be somewhat limited. Modern field electronic process instrumentation reference accuracy is typically in the 0.1% range, with transmitters coming on the market in the 0.03-0.08% accuracy range. Many of the field calibrators on the market have accuracy ratings in the 0.02-0.08% of reading range, and deadweight testers are in the range of 0.01-0.05% of reading. From this information, it is not hard to see that getting past 3X for some instruments might be rather difficult and costly.

In evaluating the selection of instruments for measurement and control functions based on accuracy considerations, we must determine what we consider to be a significant effect on the overall system. For example, if a control specification is 0.2°F and we choose an instrument whose estimated error contribution is an order of magnitude less than the spec (X ≈ 2.2), then we would expect the probable error contribution by the instrument to be 0.02°F. If this is not considered a significant contribution, then the instrument's accuracy is acceptable.

We must also remember that we are typically talking about systems that contain a number of components, each of which can contribute to the overall system error. Component errors must be added as described in Part II. The system accuracy must also be evaluated for the expected operating range as outlined in Part I. Then Table I can be used to help select instruments that will keep the system accuracy within the engineering requirements.
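The threshold values quoted above (X = 2.182 for a 10:1 effect, X = 7.053 for 100:1) can be recovered by inverting the R_e expression numerically. A bisection sketch, relying on the assumption that R_e increases monotonically with X:

```python
import math

def error_effect_ratio(x):
    # R_e = X * sqrt(X^2 + 1) + X^2
    return x * math.sqrt(x**2 + 1) + x**2

def min_ratio_for(target, lo=0.1, hi=100.0):
    """Bisect for the smallest X whose error-effect ratio reaches `target`."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if error_effect_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return hi

print(round(min_ratio_for(10), 3))    # 2.182 -> 10:1 effect
print(round(min_ratio_for(100), 3))   # 7.053 -> 100:1 effect
```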


ACCURATE REFERENCES:
1. API MPMS 21.1.8.6, "Calibration and Verification Equipment," American Petroleum Institute.
2. "Standards, Simulators, and Calibrators," Measurement & Control, September 1992.
3. "Achieving Provable Accuracy in Deadweight Pressure Testers," R.C. Evans, Measurement & Control, December 1986.
4. "Process Instrumentation Terminology," ISA-S51.1, 1993.
5. "High Accuracy Differential Pressure Calibration," J. Anthony Comeaux, ISA, 1986.
6. "There's More to Transmitter Accuracy Than the Spec," William J. Demorest, Jr., Instrument & Control Systems, May 1983.
7. "The Application of Statistical Methods in Evaluating the Accuracy of Analog Instruments and Systems," C.S. Zalkind & F.G. Shinskey, The Foxboro Co.
8. Measurement Uncertainty Handbook, Dr. R.B. Abernethy et al. & J.W. Thompson, ISA, 1980.
9. "Predicting Flow Rate System Accuracy," William S. Buzzard, Fischer & Porter Technical Information 10E-12, January 1980.
10. "What Transducer Performance Specs Really Mean," Richard E. Tasker, Sensors, November 1988.
11. "Calibration: Heart of Flowmeter Accuracy," Steve Hope, P.E., InTech, April 1994.