
Terminology in Analytical Chemistry

In Analytical Chemistry, as well as in biosensors and immunoassays, the terms
Accuracy, Precision, Selectivity, Limit of Detection, Sensitivity, Dynamic Response Range,
Stability, Response Time, Lifetime and Reliability are fundamental. They are defined as follows:

Accuracy: Accuracy is the truthfulness of the measurement. It is expressed by the error:
high accuracy means the measured value is close to the true value and the error is small.

Precision: Precision describes the reproducibility of measurements. It describes the
closeness of data to other data that have been obtained in exactly the same way.
Error: The absolute error (E) is the difference between the mean xi (from i measurements)
and the true value xt: E = xi - xt. ("Absolute" has a different meaning here than in
mathematics: the absolute error has a sign!) The relative error (Er) describes the error
in relation to the true value, in percent: Er = (xi - xt) / xt × 100%.
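As a small worked illustration, here is a minimal Python sketch of these two formulas; the measurement values and the true value are invented for the example:

    # Minimal sketch: absolute and relative error of a measurement series.
    # All values below are invented purely for illustration.
    measurements = [10.2, 9.8, 10.1, 10.3, 9.9]    # i repeated measurements
    x_true = 10.0                                  # true value, e.g., from an SRM

    x_mean = sum(measurements) / len(measurements) # mean of the i measurements
    E = x_mean - x_true                            # absolute error (keeps its sign!)
    Er = (x_mean - x_true) / x_true * 100          # relative error in percent

    print(f"mean = {x_mean:.3f}, E = {E:+.3f}, Er = {Er:+.2f}%")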

Standard: A standard, or better a Standard Reference Material (SRM), can be used to
determine the true value. A Standard Reference Material has an exactly known composition
(e.g., concentration of elements) and properties (e.g., particle size). SRMs can be
purchased from governmental and industrial sources, for example from the National
Institute of Standards and Technology (NIST) in the US. A blank is a sample without the
species to be analyzed. Blank determinations are useful for detecting certain errors and
interferences.
Calibration: Calibration is the procedure of correlating the reading of an instrument with
the true value. For example, using an SRM with a known analyte amount of 10 mg/ml that
gives an instrument reading of 100 Units, the instrument can be calibrated to 100 Units =
10 mg/ml. More than one point is needed to calibrate an instrument.
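As an illustration, here is a minimal Python sketch of such a multi-point calibration by a linear least-squares fit; the SRM concentrations and instrument readings are invented for the example:

    import numpy as np

    # Minimal sketch: multi-point calibration by a linear least-squares fit.
    # SRM concentrations and instrument readings are invented for the example.
    conc = np.array([0.0, 5.0, 10.0, 20.0])        # SRM concentrations in mg/ml
    reading = np.array([2.0, 52.0, 100.0, 198.0])  # instrument readings in Units

    # Fit the calibration line: reading = slope * conc + intercept
    slope, intercept = np.polyfit(conc, reading, 1)

    def to_concentration(units):
        """Convert an instrument reading back to a concentration (mg/ml)."""
        return (units - intercept) / slope

    print(f"slope = {slope:.2f} Units per mg/ml, intercept = {intercept:.2f} Units")
    print(f"100 Units correspond to {to_concentration(100):.2f} mg/ml")

Note that the fitted slope of the calibration line is exactly the sensitivity defined below.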
Selectivity: The selectivity describes the accuracy of the result in terms of the interference
of other substances with the measurement. In detail, selectivity is the ratio of the sensitivity
for an interfering substance to the sensitivity for the analyte (see Sensitivity below). E.g.,
if the sensitivity for an interfering substance is 1 Unit per mole and that for the analyte is
1000 Units per mole, the selectivity is 10⁻³. A high selectivity (small value) is wanted.
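The numbers from this example as a short calculation (a trivial sketch, values taken from the text):

    # Selectivity = sensitivity for the interferent / sensitivity for the analyte.
    sensitivity_interferent = 1.0      # Units per mole
    sensitivity_analyte = 1000.0       # Units per mole

    selectivity = sensitivity_interferent / sensitivity_analyte
    print(f"selectivity = {selectivity:.0e}")      # 1e-03; a small value is wanted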
Limit of Detection (LOD): The LOD is defined as the concentration of the analyte at which
the presence of the analyte is measurable with a probability of > 99%. This is the concentration
at which the signal of the measurement is at least 3 times larger than the noise
level. A quantitative measurement needs a signal of about 10 times the LOD.
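A minimal Python sketch of this 3-times-noise rule; the blank readings and the sensitivity (calibration slope) are invented for the example:

    import statistics

    # Minimal sketch: LOD = 3 * noise / sensitivity (3-sigma rule).
    # Blank readings and sensitivity are invented for the example.
    blank_readings = [0.9, 1.1, 1.0, 0.8, 1.2, 1.0]  # blank signals in Units
    sensitivity = 10.0                               # Units per mg/ml (slope)

    noise = statistics.stdev(blank_readings)         # noise level of the blank
    lod = 3 * noise / sensitivity                    # smallest detectable concentration
    print(f"noise = {noise:.3f} Units, LOD = {lod:.4f} mg/ml")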


Sensitivity: The sensitivity is the slope of the calibration curve of the analyte and is expressed
in signal units per concentration unit (e.g., nA mol⁻¹).
Dynamic Response Range: The dynamic response range is the concentration range of
the calibration curve, limited at its lower end by the LOD and at its upper
end by saturation effects of the sensor/detector system (chemical or electronic).
Stability: The stability describes the variation of the signal, which is caused by
chemical or electronic factors. Stability can change with changes in sensitivity and
often decreases toward the end of the lifetime of the sensor or system. A constant change
of the signal in one direction is called drift.
Response Time: The response time is not defined uniformly by different authors. Mostly it is
defined as the time in which the signal reaches 90%, 95% or 99% of the final signal. Most
sensors/detectors follow an exponential increase in signal, so from the mathematical point
of view the final signal is never fully reached; in practice it is the signal at equilibrium
or steady-state conditions. The response time for increasing analyte concentrations is in
most cases different from (smaller than) that for decreasing analyte concentrations, mainly
because of diffusion effects.
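For the common case of an exponential signal increase, S(t) = S_final × (1 − exp(−t/τ)), the time to reach a given fraction of the final signal follows directly by solving for t. A minimal Python sketch with an invented time constant:

    import math

    # Minimal sketch: time to reach 90%, 95% or 99% of the final signal
    # for an exponential response S(t) = S_final * (1 - exp(-t / tau)).
    tau = 2.0   # invented sensor time constant in seconds

    for fraction in (0.90, 0.95, 0.99):
        t = -tau * math.log(1.0 - fraction)   # t90, t95, t99
        print(f"t{int(fraction * 100)} = {t:.2f} s")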
Lifetime: The term lifetime is not well defined. From the practical viewpoint, the lifetime is
the time during which a system, sensor or detector preserves its properties (e.g., sensitivity),
until a limiting value is reached (e.g., the sensitivity falls below 10 nA mol⁻¹, or to 90% of
the initial value). For biosensors, the limiting factor is the biocompound itself, due to its
sensitivity to environmental factors, e.g., temperature, pH, heavy metal ions, or degradation
by microbes or proteases. The methods used in the literature to determine biosensor lifetime
are highly inconsistent; almost no direct comparison of the results of different authors is possible.
Reliability: Reliability is an "overall property" of a sensor or instrument, describing the
truthfulness of the result for a given analytical problem under consideration of
interfering substances, sample preparation and maintenance work. The reliability of a sensor is
not a fixed value; it depends strongly on whether the sensor is suitable for the application and
on the chosen sample pretreatment. Reliability is therefore a controversial topic of discussion
among experts.
Signal and Noise
Every measurement is made up of two components. One component carries the useful
information and is called the signal. The other component carries non-useful (random)
information and is called noise. The noise component is unwanted because it degrades the
accuracy and precision of the wanted signal. Noise also limits the smallest signal that can
be detected (distinguished from the noise) and therefore raises the LOD.
Signal: The information-carrying component of a measurement.
Noise: The unwanted component of a measurement, carrying no information. The term noise is
derived from radio engineering, where the presence of an unwanted signal
was observed as noise or static from the loudspeaker. Today, the term is applied throughout
science and engineering to describe any random fluctuation observed in a continuous or
repeated measurement. The effect of noise on a continuous current measurement is seen in
Figure (a).

In comparison, Figure (b) is the theoretical plot of the same measurement (current)
without noise. The difference between the two plots represents the noise of the measurement.
Signal-to-Noise ratio: For any measurement, a high signal-to-noise ratio (S/N) is desired. The
signal-to-noise ratio is an important quality measure of a measurement; knowledge
of the signal or the noise alone is of little value for estimating the accuracy and LOD
of a measurement. Usually, the noise level is independent of the magnitude of the
signal. The relative error of the measurement caused by the noise is therefore small for a large
S/N ratio and becomes larger with decreasing magnitude of the signal. As the signal
decreases further, it finally cannot be distinguished from the noise anymore.
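A minimal Python sketch of estimating S/N from a continuous current measurement; the trace is simulated here, in practice a recorded trace would be used:

    import numpy as np

    # Minimal sketch: S/N estimated as mean(signal) / standard deviation(noise)
    # for a constant current with superimposed random noise (simulated).
    rng = np.random.default_rng(0)
    current = 50.0 + rng.normal(0.0, 2.0, 1000)   # 50 nA signal + noise, in nA

    s_over_n = current.mean() / current.std()     # std() estimates the noise level
    print(f"S/N = {s_over_n:.1f}")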
Sources of Noise in Instrumental Analysis
Noise can have different sources; some of the most important are:
Chemical Noise: Arises from uncontrollable variables that affect the chemistry of the system
being analyzed. Such variables are temperature, moisture, pressure, fluctuations of chemical
equilibria, and changes of the chemical composition of the sample due to contamination
and unwanted chemical reactions. Example: Contamination of the sample surface due to
adsorption of laboratory fumes in XPS.
Instrumental Noise: Arises from each component of an instrument, e.g., sensor, transducer,
amplifier and signal-processing elements. The noise finally observed can be a complex
composition of the noise coming from each component. Example: Fluctuation of the light
source intensity in microscopy or optical spectroscopy.
Environmental Noise: Is a disturbance arising from the surroundings of the instrument.
Conductors in an electrical instrument are potential antennas that pick up electromagnetic
radiation. Such electromagnetic radiation comes from radio transmitters, engine ignition
systems, other instruments in the laboratory, lightning and ionospheric disturbances. Another
source is noise from the power supply of the instrument, or changes in power frequency
or magnitude. Noise can also arise from vibrations of the instrument, caused by building
vibrations or by vibrations from other equipment, e.g., a stirrer. Examples: Building
vibration in AFM or light microscopy.
Sources of Errors in Instrumental Analysis
Errors can be divided into two groups: Systematic Errors and Random Errors.
Systematic Errors: These comprise instrument errors, method errors and personal errors.
Instrument errors can arise from measuring devices (e.g., pipettes) used at a temperature
different from their calibration temperature, or, in electronic instruments, from changes in
circuit resistance due to dirty contacts. Most instrument errors can be eliminated by calibration.
Method errors arise from non-ideal behavior of the sample in the chosen method (e.g.,
part of the sample evaporates before it can be measured); this type of error is usually difficult
to detect. Personal errors arise from personal judgment (e.g., reading a pointer between two
scale marks) or from differences in carrying out laboratory tasks (e.g., the angle of the pipette
and the waiting time in pipetting).
Random Errors: Random errors (or statistical errors) arise in every measurement and are
usually caused by a sum of uncontrollable variables. Random errors are due to unknown
causes and occur even when all systematic errors have been eliminated. For example,
suppose a voltage is being monitored by a voltmeter that is read at half-hour intervals. The
instrument is operated under ideal environmental conditions and has been accurately
calibrated before the measurement. It will nevertheless be found that the readings vary
slightly over the period of observation. This variation cannot be corrected by any method of
calibration. The only way to offset these errors is to increase the number of readings and to
use statistical methods to obtain the best approximation of the true value. In the ideal case,
a bell-shaped curve called a Gaussian distribution is obtained after a very large number
of measurements. In comparison, systematic errors cannot be reduced by increasing the
number of repeated measurements.
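A minimal simulation sketch (all values invented) of this point: averaging reduces the random error roughly with the square root of the number of readings, while a systematic offset remains untouched:

    import numpy as np

    # Minimal sketch: random errors average out with repeated readings,
    # but a systematic offset (e.g., a calibration error) does not.
    rng = np.random.default_rng(1)
    true_voltage = 5.000          # V
    systematic_offset = 0.020     # V, not reducible by averaging

    for n in (10, 100, 10000):
        readings = true_voltage + systematic_offset + rng.normal(0.0, 0.050, n)
        std_err = readings.std(ddof=1) / np.sqrt(n)   # shrinks as 1/sqrt(n)
        print(f"n = {n:>5}: mean = {readings.mean():.4f} V, "
              f"standard error = {std_err:.4f} V")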

