A calibration is a process that compares a known (the standard) against an unknown (the customer's device). During the calibration process, the offset between these two devices is quantified and the customer's device is adjusted back into tolerance (if possible). A true calibration usually records both "as found" and "as left" data.
calibration - a set of operations that establish, under specified conditions, the relationship
between values of quantities indicated by a measuring instrument or measuring system or values
represented by a material measure or a reference material, and the corresponding values
realized by the standard.
NOTES
1. The result of a calibration permits either the assignment of values of measurands to the indications or the determination of corrections with respect to indications.
2. A calibration may also determine other metrological properties such as the effect of influence
quantities.
3. The result of a calibration may be recorded in a document sometimes called a calibration
certificate or a calibration report. (International Vocabulary of Basic and General Terms in
Metrology)
test - technical operation that consists of the determination of one or more characteristics of a
given product, process or service according to a specified procedure.
inspection - evaluation for conformity by measuring, observing, testing or gauging the relevant characteristics of an entity and comparing the results with specified requirements in order to establish whether conformity is achieved for each characteristic.
certification - procedure by which a third party gives written assurance that a product, process
or service conforms to specified requirements.
validation - confirmation by examination and provision of objective evidence that the particular
requirements for a specific intended use are fulfilled.
Instrument manufacturers usually supply specifications for their equipment that define its accuracy, precision, resolution and sensitivity. Unfortunately, these specifications are not uniform from one manufacturer to another, nor always expressed in the same terms. Moreover, even when they are given, do you know how they apply to your system and to the variables you are measuring? Some specifications are given as worst-case values, while others take your actual measurements into consideration.
Precision describes the reproducibility of a measurement. For example, measure a steady-state signal many times: if the values are closely grouped, the measurement has a high degree of precision, or repeatability. The values do not have to be the true values, just grouped together. Accuracy, by contrast, is the difference between the average of the measurements and the true value.
Resolution is the ratio between the maximum signal measured and the smallest part that can be resolved - usually by an analog-to-digital (A/D) converter.
To determine the resolution of a system in terms of voltage, we have to make a few calculations. First, assume a measurement system capable of making measurements across a ±10V range (20V span) using a 16-bit A/D converter. Next, determine the smallest possible increment we can detect at 16 bits: 2^16 = 65,536, or 1 part in 65,536, so 20V ÷ 65,536 = 305 microvolts (µV) per A/D count. Therefore, the smallest theoretical change we can detect is 305 µV.
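As a quick check of this arithmetic, here is a minimal Python sketch using the same range and bit depth as the example above.

# Theoretical resolution of a +/-10 V, 16-bit measurement system.
span_volts = 20.0        # +/-10 V range = 20 V span
bits = 16
counts = 2 ** bits       # 65,536 discrete A/D levels
volts_per_count = span_volts / counts
print(f"{counts} counts, {volts_per_count * 1e6:.0f} uV per A/D count")
# -> 65536 counts, 305 uV per A/D count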
Unfortunately, other factors, such as noise, enter the equation and diminish the theoretical number of usable bits. A data acquisition system specified to have 16-bit resolution may also contain 16 counts of noise. Sixteen counts of noise equal 4 bits (2^4 = 16); the 16 bits of resolution specified for the measurement system are therefore diminished by 4 bits, so the A/D converter actually resolves only 12 bits, not 16.
A technique called averaging can improve resolution, but it sacrifices speed. Averaging reduces the noise by the square root of the number of samples, so it requires multiple readings to be added together and then divided by the total number of samples. For example, in a system with three bits of noise (2^3 = 8, that is, eight counts of noise), averaging 64 samples would reduce the noise contribution to one count: √64 = 8, and 8 ÷ 8 = 1. However, this technique cannot reduce the effects of non-linearity, and the noise must have a Gaussian distribution.
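The square-root relationship is easy to verify numerically. The following Python sketch simulates eight counts of Gaussian noise and averages 64 samples; the numbers are illustrative, not from any particular instrument.

import numpy as np

rng = np.random.default_rng(0)
noise_counts = 8.0                      # three bits of noise (2^3 = 8 counts)
samples = rng.normal(0.0, noise_counts, size=(100_000, 64))

single = samples[:, 0].std()            # noise of one raw reading
averaged = samples.mean(axis=1).std()   # noise after averaging 64 readings
print(f"single: {single:.1f} counts, averaged: {averaged:.1f} counts")
# -> roughly 8.0 and 1.0 counts, since 8 / sqrt(64) = 1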
Sensitivity is an absolute quantity: the smallest absolute amount of change that can be detected by a measurement. Consider a measurement device that has a ±1.0 volt input range and ±4 counts of noise. If the A/D converter resolution is 2^12, the peak-to-peak sensitivity will be ±4 counts × (2 V ÷ 4096), or ±1.9 mV p-p. This dictates the smallest sensor change the system can detect. For example, take a sensor rated for 1000 units with an output voltage of 0-1 volt (V): at 1 volt the equivalent measurement is 1000 units, so 1 mV equals one unit. However, the sensitivity is 1.9 mV p-p, so a change of about two units is needed before the input detects it.
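For reference, here is the same sensitivity arithmetic as a minimal Python sketch, using the example values above.

# Peak-to-peak sensitivity from noise counts (values from the example above).
range_volts = 2.0        # +/-1.0 V input range = 2 V span
bits = 12
noise_counts = 4         # +/-4 counts of noise
volts_per_count = range_volts / 2 ** bits        # ~0.488 mV per A/D count
sensitivity_pp = noise_counts * volts_per_count  # ~1.95 mV, the ±1.9 mV p-p above
print(f"peak-to-peak sensitivity: +/-{sensitivity_pp * 1e3:.2f} mV")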
Condition No. 1 (temperature = 25 °C): a 200 mV reading could fall within a range of 199.631 mV to 200.369 mV.
Condition No. 2 (temperature = 25 °C): a 3.0 V reading could fall within a range of 2.9982 V to 3.0018 V.
Summary Analysis:
Accuracy: Consider Condition No. 1. The total accuracy is 369 uV ÷ 2 V × 100 = 0.0184%
Accuracy: Consider Condition No. 2. The total accuracy is 1.786 mV ÷ 10 V × 100 = 0.0177%
Effective Resolution: The USB-1608G has a specification of 16 bits of theoretical resolution. However, the effective resolution is the ratio between the maximum signal being measured and the smallest voltage that can be resolved, i.e. the sensitivity. For example, considering Condition No. 2, divide the sensitivity value by the measured signal value (138.5 µV ÷ 3.0 V = 46.2e-6); converting to the equivalent bit value gives 1 ÷ 46.2e-6 ≈ 21,660, or 14.4 bits of effective resolution. To further improve the effective resolution, consider averaging the values as previously discussed.
Sensitivity: The most sensitive measurement is made on the ±1 volt range, where the noise is only 41.5 µV rms, whereas the sensitivity of the 5 volt range is 138.5 µV rms. In general, when configuring the equipment for a sensor, select the lowest range that still captures the sensor's full output, since it offers the best sensitivity. For example, if the output signal is 0-3 volts, select the 5 volt range instead of the 10 volt range.
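To make the effective-resolution arithmetic concrete, here is a Python sketch following Condition No. 2 above (sensitivity 138.5 µV rms on a 3.0 V signal).

import math

sensitivity = 138.5e-6   # volts rms, from the summary above
signal = 3.0             # volts measured
ratio = sensitivity / signal        # ~46.2e-6
effective_counts = 1.0 / ratio      # ~21,660 resolvable parts
effective_bits = math.log2(effective_counts)
print(f"{effective_counts:.0f} counts ~ {effective_bits:.1f} bits")
# -> about 14.4 bits of effective resolution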
About Calibration
What is Calibration?
Calibration is the act of comparing a device under test (DUT) of an unknown value with a reference
standard of a known value. A person typically performs a calibration to determine the error or verify
the accuracy of the DUT’s unknown value. As a basic example, you could perform a calibration by
measuring the temperature of a DUT thermometer in water at the known boiling point (212 degrees
Fahrenheit) to learn the error of the thermometer. Because visually determining the exact moment
that boiling point is achieved can be imprecise, you could achieve a more accurate result by placing a
calibrated reference thermometer, of a precise known value, into the water to verify the DUT
thermometer.
A logical next step that can occur in a calibration process may be to adjust, or true-up, the instrument
to reduce measurement error. Technically, adjustment is a separate step from calibration.
For a more formal definition of calibration, we turn to the BIPM (Bureau International des Poids et
Mesures or International Bureau of Weights and Measures, www.bipm.org), based in France, which is
the coordinator of the worldwide measurement system and is tasked with ensuring worldwide
unification of measurements.
BIPM produces a list of definitions for important technical terms commonly used in measurement and
calibration. This list, referred to as the VIM (International Vocabulary of Metrology), defines the
meaning of calibration as an “operation that, under specified conditions, in a first step, establishes a
relation between the quantity values with measurement uncertainties provided by measurement
standards and corresponding indications with associated measurement uncertainties and, in a second
step, uses this information to establish a relation for obtaining a measurement result from an
indication.” This definition builds upon the basic definition of calibration by clarifying that the
measurement standards used in calibration must be of known uncertainty (amount of possible error).
In other words, the known value must have a clearly understood uncertainty to help the instrument
owner or user determine if the measurement uncertainty is appropriate for the calibration.
International System of Units (SI) – The Top Level of Known Measurement Standards
How do we arrive at measurement standards of known values against which we calibrate our devices
under test? For the answer, we turn to the International System of Units, abbreviated “SI”, which is
derived from “Le Système International d'Unités” in French. The SI consists of seven base units which
are the second, meter, kilogram, ampere, kelvin, mole and candela.
The seven base SI units are mostly derived from quantities in nature that do not change, such as the speed of light. The kilogram is the exception: it is still defined by a cylindrical metallic-alloy artifact known as the International Prototype of the Kilogram (IPK), or "Le Grand K", which is carefully and securely stored under a double glass enclosure in a vault in Paris. It is anticipated that the definition of the kilogram will soon change from an artifact to one based on the Planck constant.
It is interesting to note that the definitions of some SI units have improved over time. For example, consider the historical changes to the definition of one second:
originally, one second was defined as 1/86,400 of the mean solar day
in 1960, it was redefined as 1/31,556,925.9747 of the tropical year 1900
since 1967, it has been defined as the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom.
Using the latest definition, the second is realized by the weighted average of atomic clocks all around the world.
At this point, it should be pointed out that the base SI units can be combined per algebraic relations to form derived units of measure important in calibration, such as pressure (pounds per square inch, or PSI). In this case, pressure is derived from the meter, kilogram and second base SI units.
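As an illustration of such an algebraic relation, the Python sketch below derives the SI pressure equivalent of one PSI from the standard definitions of the pound-force and the inch; these conversion constants are general knowledge, not taken from this document.

# 1 Pa = 1 kg/(m*s^2); PSI is pound-force per square inch.
LBF_IN_NEWTONS = 4.4482216152605   # standard pound-force in newtons
INCH_IN_METERS = 0.0254            # exact by definition

psi_in_pascals = LBF_IN_NEWTONS / INCH_IN_METERS ** 2
print(f"1 PSI = {psi_in_pascals:.3f} Pa")   # -> 1 PSI = 6894.757 Pa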
The SI was created by resolution of the General Conference on Weights and Measures (CGPM), an
intergovernmental organization created via the Treaty of the Meter, or Convention of the Metre,
signed in Paris in 1875. The Treaty of the Meter also created organizations to help the CGPM: the International Committee for Weights and Measures (CIPM), which discusses, examines and compiles reports on the SI for review by the CGPM, and the BIPM, whose mission we mentioned above. You can find and review CGPM resolutions at the BIPM website. Today, the member states of the CGPM include all major industrialized countries.
Calibration Interoperability
A key benefit of having the BIPM manage the SI on a worldwide basis is calibration interoperability.
This means that all around the world, we are using the same measurement system and definitions.
This allows someone in the U.S. to purchase a 1-ohm resistor in Australia and be confident that it will
be 1 ohm as measured by U.S. Standards, and vice versa. In order to have interoperability, we need to
have all of our measurements traceable to the same definition.
Now that we have the SI reference standards, how do we efficiently and economically share them with
the world?
The Calibration Traceability Pyramid
Think of the SI at the top of a calibration pyramid where the BIPM helps pass the SI down to all levels
of use within countries for the fostering of scientific discovery and innovation as well as industrial
manufacturing and international trade.
Just below the SI level, the BIPM works directly with the National Metrology Institutes (NMIs) of
member states or countries to facilitate the promotion of the SI within those countries.
The NMI of the United States is the National Institute of Standards and Technology (NIST), a non-regulatory federal agency of the United States Department of Commerce. Fluke Calibration works with
NMIs around the world where it does business. You can see a list of NMIs and other metrology
organizations by country at the BIPM's NMI page.
Because it’s not affordable, efficient or even possible for everybody within a country to work directly
with their NMI, NMI-level calibration standards are used to calibrate primary calibration standards or
instruments; primary standards are then used to calibrate secondary standards; secondary standards
are used to calibrate working standards; and working standards are used to calibrate process
instruments. In this way, references to the SI standards can be efficiently and affordably passed down
the calibration pyramid through the NMI, into industry as needed.
In some rare instances, an SI unit can be realized directly by a laboratory using a special instrument
that implements physics to achieve the measurement. The Quantum Hall Ohm is an example of this
type of device. While it is directly used in several calibration laboratories in the United States, the NMI
is still involved by helping ensure the device is measuring correctly.
Calibration Traceability
The lineage from the lowest level of the calibration pyramid all the way up to the SI standards can be
referred to as “traceability”, an important calibration concept. Stated another way, traceability, or
traceable calibration, means that the calibration was performed with calibrated reference standards
that are traceable through an unbroken chain back to the pertinent SI unit, through an NMI.
Calibration traceability may also be thought of as the pedigree of the calibration.
Traceability is also important in test and measurement because many technical and quality industry
standards require measurement devices to be traceable. For example, traceable measurements are
required in the medical device, pharmaceutical, aerospace, military, and defense industries as well as
in many other manufacturing industries. Traceable calibration also helps improve process control and research by ensuring that measurements and the resulting data are correct.
As with other topics associated with calibration, many technical requirements have been developed for
managing and ensuring calibration traceability. To mention a few requirements, consideration must be
given to calibration uncertainty, calibration interval (when does the calibration expire?), and methods
used to ensure that traceability stays intact in the calibration program.
For more information regarding traceability and other important metrology concepts, visit Fluke
Calibration’s General Calibration & Metrology Topics technical library and metrology training pages.
Calibration Accreditation
When calibrations are performed, it’s important to be able to trust the process by which they are
performed. Calibration accreditation provides that trust. Accreditation gives an instrument owner
confidence that the calibration has been done correctly.
Calibration accreditation means that a calibration process has been reviewed and found to be
compliant with internationally accepted technical and quality metrology requirements. ISO/IEC 17025
is the international metrology quality standard to which calibration laboratories are accredited.
Accreditation services are provided by independent organizations that have been certified to do this
type of work. Every large country typically has at least one accreditation provider. For example, in the
United States, the National Voluntary Laboratory Accreditation Program (NVLAP), A2LA, and LAB are
accreditation providers. In the United Kingdom, the United Kingdom Accreditation Service (UKAS) is the accreditation provider.
International agreements ensure that once a calibration process is accredited in one country, any
calibrations coming from that process can be accepted worldwide without any additional technical
acceptance requirements.
Calibration Certificates
A calibration laboratory often provides a certificate with the calibration of an instrument. The
calibration certificate provides important information to give the instrument’s owner confidence that
the device was calibrated correctly and to help show proof of the calibration.
A calibration certificate might include a statement of traceability or a list of the calibration standards
used for the calibration, any data resulting from the calibration, the calibration date, and possibly
pass or fail statements for each measurement result.
Calibration certificates vary because not all calibration laboratories follow the same industry
standards, and they also can vary depending on where the calibration fits within the calibration
pyramid or hierarchy. For example, the calibration certificate required for a grocery store scale may be
very simple, while the calibration certificate for a precision balance in a calibration laboratory may
have a lot more technical content. Calibration certificates coming from an accredited calibration
process have some very particular requirements which can be found in the international
standard ISO/IEC 17025.
To see a calibration certificate sample up close, learn more about its format and individual elements,
and read about Fluke Calibration’s process of standardizing certificates among its acquired brands,
see the application note, A New Format for Fluke Calibration Certificates of Calibration.
Calibration Uncertainty
From the definition of calibration previously discussed, you’ve probably noticed that “uncertainty”
(amount of possible error) rather than "accuracy" is used to describe the capability of the calibration
processes and outcomes. In the test and measurement industry, accuracy is often used to describe
the measurement capability of an instrument. Often the instrument manufacturer intends for an
accuracy specification to represent the expected range of error that may occur when using the
instrument. However, the VIM provides guidelines that "uncertainty" is the preferred term to use for
describing the measurement specification of an instrument. Since uncertainty is the chosen vernacular for discussing the amount of error, and is such an important concept in the calibration discussion, it deserves a bit more attention.
Let’s start with a basic definition. Uncertainty describes a range of values in which the true value can
be found. For example, if a voltmeter has a measurement uncertainty of ± 0.1 V, when measuring a
voltage value that appears on the display as 10.0 V, the true voltage value could be as low as 9.9 V or
as high as 10.1 V. If the 0.1 V uncertainty is stated to have 95 % coverage, we can have 95 %
confidence that 10V ± 0.1 V contains the true value. Fortunately, most results tend to be toward the
middle of the possible range, because random uncertainties tend to follow the Gaussian distribution
or normal bell curve.
With this understanding of uncertainty in mind, the calibration standard needs to be of sufficiently low
uncertainty that the user has confidence in the calibration result. The calibration standard should have
lower uncertainty (better accuracy) than the device being calibrated. For example, it does not make
sense to calibrate a precise micrometer using a measuring tape. Similarly, it does not make sense to
calibrate a high-precision precious metal scale by comparing it with a bathroom scale.
An important international metrology standard, the GUM (Guide to the Expression of Uncertainty in Measurement), goes one step further on the importance of uncertainty with the following statement:
“When reporting the result of a measurement of a physical quantity, it is obligatory that some
quantitative indication of the quality of the result be given so that those who use it can assess
its reliability.”
Essentially, the GUM states that a measurement without a known uncertainty value will have an
unknown level of quality and reliability.
Uncertainties can enter the calibration equation from various sources. For example, when calibrating a temperature probe, uncertainties can be introduced by the reference thermometer and by the calibration system. Combining the uncertainties of the individual components is referred to as an uncertainty evaluation. Some evaluations use estimates of uncertainty that allow many different models of temperature probes to be used, so long as they do not exceed a budgeted value; such evaluations are called "uncertainty budgets".
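As an illustration, here is a minimal Python sketch of a root-sum-square uncertainty budget; the three component values are hypothetical, and a real evaluation would follow the GUM in full.

import math

# Hypothetical standard uncertainties, in kelvin.
budget_k = {
    "reference thermometer": 0.010,
    "bath / dry-well stability": 0.005,
    "readout resolution": 0.003,
}
combined = math.sqrt(sum(u ** 2 for u in budget_k.values()))  # root-sum-square
expanded = 2 * combined   # coverage factor k = 2, roughly 95 % confidence
print(f"combined: {combined * 1000:.1f} mK, expanded (k=2): {expanded * 1000:.1f} mK")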
Here is an example of why calibration uncertainty and measurement uncertainty are an important part
of our daily lives. A typical gasoline pump in the United States can pump gas with an uncertainty of
about ± 2 teaspoons (0.003 gallons) per gallon. Nobody is going to be upset if they are short a couple
of teaspoons of gasoline, and the gas station may not lose much money by giving away two teaspoons
of gasoline per gallon sold. However, if the uncertainty of the gasoline pump is ± 0.1 gallon, you can
imagine how inappropriate this level of uncertainty would be for this measurement. You could be
shorted a gallon of gas for every 10 gallons that you pump. So, the reason why uncertainty is so
important in calibration and measurement is because it is needed to allow the owner of the instrument
or the customer of the measurement to evaluate confidence in the instrument or the measurement. It
could be an interesting experiment to ask a gasoline station manager for an estimate of uncertainty
for one of their gasoline pumps!
For several years, the simple "4 to 1" TUR (Test Uncertainty Ratio) has been implemented in many calibration processes. It says that an appropriate uncertainty relationship between the calibration standard and the DUT is 4 to 1, meaning the uncertainty of the reference measurement should be at least four times smaller than the uncertainty of the DUT.
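A minimal Python sketch of that check; the uncertainty values are hypothetical and must be in the same units.

def tur(dut_uncertainty: float, reference_uncertainty: float) -> float:
    # Test Uncertainty Ratio: DUT uncertainty over reference uncertainty.
    return dut_uncertainty / reference_uncertainty

ratio = tur(dut_uncertainty=0.10, reference_uncertainty=0.02)
print(f"TUR = {ratio:.1f}:1 -> {'OK' if ratio >= 4 else 'insufficient'}")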
Calibrators
Calibration Software
Using calibration software with the calibrator allows a user to completely automate the calibration and
calculate calibration uncertainty. Calibration software increases the efficiency of performing
calibrations while reducing procedural errors and reducing sources of uncertainty. There is
also calibration asset management software available that manages calibration equipment inventory.
Calibration Disciplines
There are many calibration disciplines, each having different types of calibrators and calibration
references. To get an idea of the types of calibrators and instruments that are available, see the wide
array of Fluke Calibration calibrators and other calibration equipment. Common calibration disciplines
include but are not limited to:
Electrical
Radio frequency (RF)
Temperature
Humidity
Pressure
Flow
Dimensional
Resources
Introduction to the ISO Guide to the Expression of Uncertainty in Measurement (GUM) - on-demand webinar by Fluke Chief Metrologist, Jeff Gust
There are several ways to calibrate an instrument depending on the type of instrument and the chosen
calibration scheme. There are two general calibration schemes:
1. Calibration by comparison with a source of known value. An example of a source calibration scheme is calibrating an ohmmeter by measuring a calibrated reference standard resistor. The reference resistor provides (sources) a known value of the ohm, the desired calibration parameter. A more sophisticated calibration source than a single resistor is a multi-function calibrator, which can source known values of resistance, voltage, current, and possibly other electrical parameters. A variant of source-based calibration is calibrating the DUT against a source of known natural value, such as the chemical melt or freeze temperature of a material like pure water.
2. Calibration by comparison of the DUT measurement with the measurement from a calibrated reference standard. For example, a resistance calibration can be performed by measuring a resistor of unknown value (not calibrated) with both the DUT instrument and a reference ohmmeter; the two measurements are then compared to determine the error of the DUT.
From this basic set of calibration schemes, the calibration options expand with each measurement
discipline.
Calibration Steps
A calibration process starts with the basic step of comparing a known with an unknown to determine
the error or value of the unknown quantity. However, in practice, a calibration process may consist of
"as found" verification, adjustment, and "as left" verification.
Many measurement devices are adjusted physically (turning an adjustment screw on a pressure
gauge), electrically (turning a potentiometer in a voltmeter), or through internal firmware settings in a
digital instrument. Non-adjustable instruments, sometimes referred to as “artifacts”, such
as temperature RTDs, resistors, and Zener diodes, are often calibrated by characterization. Calibration
by characterization usually involves some type of mathematical relationship that allows the user to obtain calibrated values from the instrument's readings. The mathematical relationships vary from simple error offsets
calculated at different levels of the required measurement, like different temperature points for
a thermocouple thermometer, to a slope and intercept correction algorithm in a digital voltmeter, to
very complicated polynomials such as those used for characterizing reference standard radiation
thermometers.
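As an illustration of calibration by characterization, the Python sketch below fits a simple slope-and-intercept correction to hypothetical as-found data; real characterizations may use higher-order polynomials.

import numpy as np

# Hypothetical as-found data: known standard values vs. DUT readings.
reference = np.array([0.0, 2.5, 5.0, 7.5, 10.0])
indicated = np.array([0.012, 2.509, 5.011, 7.514, 10.016])

# Least-squares fit mapping raw readings to corrected values.
slope, intercept = np.polyfit(indicated, reference, 1)
corrected = slope * indicated + intercept
print(f"corrected = {slope:.6f} * reading + {intercept:+.6f}")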
The “as left” verification step is required any time an instrument is adjusted to ensure the adjustment
works correctly. Artifact instruments are measured “as-is” since they can’t be adjusted, so “as found”
and “as left” steps don’t apply.
Calibration Example
A Dry-well Calibrator (Fluke 9190A) with Reference and DUT
Thermometer Probes
Let’s say that you use a precise thermometer to control the temperature in your pharmaceutical plant
processes and you need to calibrate it regularly to ensure that your products are created within
specified temperature ranges. You could send your thermometer to a calibration lab or perform the
calibration yourself by purchasing a temperature calibrator, such as a liquid bath calibrator or dry-well
calibrator. A liquid-bath calibrator (like the Fluke Calibration models 6109A or 7109A portable
calibration baths) will have a temperature-controlled tank filled with a calibration fluid connected to a
calibrated temperature display. The dry-well calibrator is similar but a metal temperature-controlled
block will have measurement wells that are sized to fit the diameter of the DUT thermometer. The
calibrator has been calibrated to a known accuracy. You place your thermometer, the device under
test (DUT), in the calibrator tank or measurement well, then note the difference between the calibrator display and the DUT over a distributed set of temperatures within the range for which your
thermometer is used. In this way, you verify if your thermometer is within specification or not. If the
thermometer needs to be adjusted, you may be able to adjust the display of the thermometer, if it has
one, or you can use the calibration results to determine new offsets or characterization values for the
probe. If you make adjustments, then the calibration process is repeated to ensure the adjustments
worked correctly and verify that the thermometer is within specification. You can also use the
calibrator to occasionally check the thermometer to make sure it's still in tolerance. This same
general process can be used for many different measurement devices like pressure gauges,
voltmeters, etc.
Resources:
How to Calibrate an RTD or Platinum Resistance Thermometer (PRT) - application note by Fluke
Calibration
Calibration helps keep your world up, running, and safe. Though most people never realize it, thousands of calibrations are quietly conducted every day around the world for your benefit. Whether you are on a flight, taking medication, or passing a nuclear facility, you can expect that the systems and processes used to create and maintain these things are calibrated regularly to prevent failure, both in production and in ongoing use.
To further appreciate the role precise measurements and calibration play in your life, watch this three-
minute video by Fluke Chief Metrologist, Jeff Gust, made to help commemorate World Metrology Day,
which occurs on May 20 of each year. The video helps demonstrate how precise measurements
impact your daily life in transportation.
Test and measurement devices must be calibrated regularly to ensure they continue to perform their
jobs properly.
Safety and compliance with industry standards, such as those enforced by the FDA in the United
States, are obvious reasons for keeping test and measurement tools calibrated. However, as
technology demands increase, and manufacturing costs go up, higher precision tests and
measurements are moving from the calibration laboratory and onto the factory floor.
Test and measurement devices that were manufactured within specifications can deteriorate over time
due to age, heat, weathering, corrosion, exposure to electronic surges, accidental damage, and
more. Even the best test and measurement instruments can possess manufacturing imperfections,
random noise, and long-term drift that can cause measurement errors. These errors, such as being
off a few millivolts or degrees, can be propagated to products or processes being tested, with the
potential to falsely reject a good unit or result or to falsely accept a bad unit or result.
Ensuring that test and measurement equipment is of sufficient accuracy to verify product or process
specifications is necessary to trust and build on the results of scientific experiments, ensure the
correct manufacture of goods or products, and conduct fair trade across country borders.
Calibrators Need to Be Calibrated Too
A calibrator can drift or wear out of its calibration tolerances over time and needs to be calibrated on a
regular basis. Usually following the minimum 4:1 ratio rule, calibrators are calibrated regularly by
more accurate reference standards. This process continues all the way up the calibration traceability
pyramid to the most accurate calibration standards maintained by a National Metrology Institute.
Calibration ROI
Periodic calibration is usually viewed as a smart business investment with a high return on investment
(ROI). Calibration eliminates waste in production, such as recalls caused by producing things outside
of design tolerances. Calibration also helps identify and repair or replace manufacturing system
components before they fail, avoiding costly downtime in a factory. Calibration prevents both the hard
and soft costs of distributing faulty products to consumers. With calibration, costs go down while
safety and quality go up.
It's important to point out that both the accuracy and the cost of calibration normally decline as you move down the calibration pyramid: lower accuracy levels may be sufficient on a manufacturing floor compared with those needed in a primary lab. ROI is maximized by choosing calibration that matches the accuracy needed.
1. Initial calibration: zero air and calibration gas atmospheres are supplied and any necessary adjustments are made to the analyser. Once this is done, calibration gas concentrations are required at approximately 20, 40, 60 and 80 per cent of the full measurement range of the instrument, and the instrument response is required to agree within 2 per cent of the calculated reference value. Alternatively, when actual concentration is plotted against expected concentration, the slope of the best-fit line should be within 1.00 ± 0.01, with a correlation coefficient of at least 0.999. This is also referred to as a linearity or multi-point check (a sketch of this check appears after this list).
2. Operational precision checks: where the zero and span responses of the
instrument are checked for drift on a regular basis. The recommended
frequency is daily, but in any case it is recommended that precision checks be
undertaken at least weekly to adjust or correct for zero and span drift. The
drift tolerances given by the standards vary with each contaminant. In some
standards this is also called an operational recalibration.
3. Operational recalibration: zero and span gases are supplied, as for an initial calibration. It should be done when the analyser drift exceeds the instrument performance requirements, or when six months have passed since the last calibration. Multi-point checks should be carried out every six months.
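As an illustration of the multi-point (linearity) check described in step 1 above, here is a minimal Python sketch; the gas concentration values are hypothetical.

import numpy as np

# Hypothetical multi-point check data, as % of full measurement range.
expected = np.array([20.0, 40.0, 60.0, 80.0])   # reference gas concentrations
actual = np.array([19.9, 40.2, 59.8, 80.3])     # analyser responses

# Best-fit slope of actual vs expected, and the correlation coefficient.
slope, _ = np.polyfit(expected, actual, 1)
r = np.corrcoef(expected, actual)[0, 1]
within_2pct = np.all(np.abs(actual - expected) / expected <= 0.02)
print(f"slope = {slope:.3f} (want 1.00 +/- 0.01), r = {r:.4f} (want >= 0.999), "
      f"all points within 2%: {within_2pct}")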
7.5 Training
Training of technicians carrying out calibration and maintenance work on air quality
monitoring instruments is vital, as most instruments are a sophisticated combination
of pneumatics, electronics, mechanical components and software. Training should
be an integral part of establishing and operating an air quality monitoring site.
Several types of training should be provided (considered as core competencies) to
technical staff, including:
an introduction to fundamental air pollution processes and air pollution
monitoring techniques (eg, Clean Air Society of Australia and New Zealand
courses)
specific training on instrumentation operation and maintenance (usually
through systems providers)
electronics and electrical systems
quality systems management and quality assurance in analytical techniques.
Appropriately trained staff could apply for International Accreditation New Zealand
(IANZ) or National Association of Testing Authorities (NATA) accreditation for a
monitoring method that meshes well with a quality management system. IANZ or
NATA accreditation recognises and facilitates competency in specific types of
testing, measurement, inspection or calibration.
Another effective method of training and systems improvement is to participate in
reciprocal auditing activities between monitoring agencies. The level of formality of
the arrangement is up to the agencies involved, but is likely to work well at any level.
The general approach is for technicians from one monitoring agency to visit and
audit the procedures and methods (such as calibration and maintenance activities)
used at another agency, in relation to both the auditee’s own systems and
documented procedures as well as against accepted industry practices and standard
methods. Both auditee and auditor learn through this exercise, with the ultimate aim
of continual improvement in monitoring systems and data quality.
Recommendation 20: Training
Air quality monitoring technical staff should be provided with basic training on core
air quality monitoring competencies.
Another effective method of training and systems improvement is to participate in
reciprocal auditing activities between monitoring agencies.