
Static and Dynamic Characteristics

The performance of an instrument is evaluated on the basis of its static or dynamic characteristics. Static characteristics refer to the case in which the inputs to the system are either held constant or vary slowly with respect to time, whereas dynamic characteristics refer to the performance of the system when the inputs vary rapidly with respect to time.
For example, the slow rise in temperature around noon can be treated as a static input, since the temperature attained is nearly constant and changes only slightly. In contrast, the pressure inside an I.C. engine, which changes rapidly, must be treated as a dynamic input.
There are many phenomena that can be conveniently described by the static response, while some can only be represented by the dynamic response. The overall performance of a system can often be evaluated by a semi-qualitative superposition of the static and dynamic characteristics. This approach is a mathematically convenient treatment with acceptable approximation.
Static Calibration
Static calibration refers to a situation in which all inputs (desired, interfering, modifying) except one are kept at constant values. The one input under study is then varied over some range of constant values, which causes the output to vary over some range of constant values. The input-output relation developed in this way represents a static calibration valid under the stated constant conditions of all the other inputs.
This procedure is repeated for every other input of interest (keeping the rest of the inputs constant). The overall behavior of the instrument, when all the inputs are applied together, can then be approximated by superposition of the effects of the individual inputs.
The statement that one input is varied while the others are held constant implies that all the inputs are determined independently of the instrument being calibrated. For interfering or modifying inputs, the measurement of these inputs usually need not be made with extremely high accuracy. For example, suppose that in a pressure gauge temperature is an interfering input such that a temperature change of 100 deg causes a pressure error of 0.100%. If the 100 deg interfering input were measured with a thermometer which itself had an error of 2.0%, the pressure error actually would have been 0.102% rather than 0.100%, a discrepancy that is completely negligible in engineering work. While calibrating the instrument against its desired input, however, utmost care must be taken: it is impossible to calibrate an instrument to an accuracy greater than that of the standard with which it is compared.
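A rough check of this arithmetic in a minimal Python sketch; the 0.100%-per-100-deg figure and the 2% thermometer error come from the example above, everything else is illustrative.

# Error in measuring the interfering input propagated to the pressure error
pressure_error_per_100deg = 0.100   # % pressure error caused by a 100 deg change (from the example)
thermometer_error = 0.02            # the thermometer itself is in error by 2%

measured_temp_change = 100.0
actual_temp_change = measured_temp_change * (1 + thermometer_error)   # could really be 102 deg

actual_pressure_error = pressure_error_per_100deg * actual_temp_change / 100.0
print(actual_pressure_error)        # 0.102 (%), against the assumed 0.100% -- a negligible difference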

By means of calibration, the instrument is checked against a known standard, thereby helping in the evaluation of errors and accuracy. Calibration involves a comparison of the particular instrument with either
i. a primary standard,
ii. a secondary standard of higher accuracy than the instrument to be calibrated, or
iii. an instrument of known and higher accuracy.

All working instruments in actual use must be calibrated against some


reference instruments which have higher accuracy. These reference
instruments in turn must be calibrated against higher standards of
secondary or tertiary levels. These standards are also calibrated against
primary standards
Primary standards are the state of the art, most accurate way known to
measure the quantity of interest. Such standards are developed, maintained
and improved by national laboratories such as NIST, USA (National Institute
of Standards and Technology) or NPL, New Delhi (National Physical
laboratory.
The following steps are necessary while calibrating an instrument:
1. Examine the construction of the instrument, and identify and list all the possible inputs.
2. Ascertain which of the inputs will be significant in the application for which the instrument is calibrated.
3. Have apparatus that allows all the significant inputs to be varied over the ranges considered necessary.
4. Have standards to measure the inputs.
5. Holding some inputs constant, vary the other input(s), record the output(s), and develop the desired static input-output relations (a short sketch of this step follows the list).
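A minimal sketch of step 5 in Python, assuming illustrative data and a simple least-squares straight-line fit; the names q_in and q_out are placeholders, not taken from the text.

# Hold the other inputs constant, vary the input under study against a standard,
# record the output, and fit a static input-output relation.
import numpy as np

q_in = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])    # input applied from the standard
q_out = np.array([0.02, 1.01, 2.05, 2.98, 4.01, 5.03])   # recorded instrument output

# Least-squares straight line q_out = a + k * q_in
k, a = np.polyfit(q_in, q_out, 1)
print(f"intercept a = {a:.3f}, slope k = {k:.4f}")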

Static Characteristics

True Value: If the input value applied during calibration is known exactly, it is called the true value.

Accuracy: The accuracy of a measuring system is its ability to indicate the true value exactly.

Static error (e): The static error is defined as the difference between the true value applied to a measuring system (the input) and the measured value indicated by the system (the output),
e = true value - measured value
from which the % error, and hence the % accuracy, can be found:
% accuracy = (1 - |e| / true value) x 100

By definition, accuracy can be determined only when the true value is known, such as during a calibration.
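A small sketch of these relations with illustrative numbers; stating accuracy as 100 minus the percentage error relative to the true value is one common convention, assumed here.

# Static error and % accuracy for a single calibration point
true_value = 250.0        # input applied from the standard (illustrative)
measured_value = 252.5    # instrument reading (illustrative)

e = true_value - measured_value                 # static error
percent_error = abs(e) / true_value * 100.0     # error as a % of the true value
percent_accuracy = 100.0 - percent_error
print(e, percent_error, percent_accuracy)       # -2.5 1.0 99.0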

Precision: The precision of a measurement system refers to its ability to indicate a particular value upon repeated but independent applications of a specific value of input. A precise system has both repeatability and reproducibility. However, the precision of a system may or may not guarantee its accuracy.

Precision Error: The precision error is a measure of the random variation found during repeated measurements.

Bias Error: The bias error is the difference between the average measured value and the true value.
Both the precision error and the bias error affect the acceptability of the system.
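The two error types can be separated with a short sketch: from repeated readings of a known input, the bias error is estimated from the mean and the precision error from the scatter (here the sample standard deviation). The readings below are illustrative only.

import statistics

true_value = 50.0
readings = [50.3, 50.1, 50.4, 50.2, 50.3, 50.1]   # repeated, independent readings of the same input

bias_error = statistics.mean(readings) - true_value   # systematic offset (bias error)
precision_error = statistics.stdev(readings)          # random scatter (precision error)
print(f"bias = {bias_error:.3f}, precision (std dev) = {precision_error:.3f}")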

Linearity:
If an instrument's calibration curve for the desired input is not a straight line, the instrument may still be accurate. However, in many applications linear behavior is desirable: the conversion from a scale reading to the corresponding measured value of the input quantity is most convenient if we merely have to multiply by a fixed constant rather than consult a non-linear calibration curve. When the instrument is part of a larger data or control system, linear behavior of the parts often simplifies design and analysis. Linearity is then a measure of the maximum deviation of any calibration point from the idealized straight line, and it may be expressed as a percentage of the actual reading, as a percentage of the full-scale reading, or as a combination of the two.
If the relationship between the output and the input can be expressed in the equation form
qo = a + k * qi
where a and k are constants, the instrument is said to possess linearity. In practice, perfect linearity is never achieved; deviations from the ideal linear relation are termed linearity tolerances.

Linearity can be specified as independent linearity or proportional linearity. A ±3% independent linearity means that the output will remain within values set by two parallel lines spaced ±3% of the full-scale output from the idealized straight line (fig a). A ±3% proportional linearity is illustrated in (fig b): here the deviation from the idealized line is never more than ±3% of the recorded value, regardless of the magnitude of the input.
Note: A non-linear input-output relation may also be approximated as linear over a restricted range.
An instrument which does not possess linearity can still be highly accurate. In some instruments which are inherently non-linear in nature, linearization can be achieved mechanically or electrically over a limited range.
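A sketch of how an independent-linearity figure might be computed: fit the idealized straight line to the calibration points, then express the largest deviation of any point from that line as a percentage of the full-scale output. The data and the least-squares choice of idealized line are assumptions for illustration.

import numpy as np

q_in = np.array([0.0, 25.0, 50.0, 75.0, 100.0])
q_out = np.array([0.0, 26.0, 51.5, 74.0, 100.0])

k, a = np.polyfit(q_in, q_out, 1)              # idealized line q_out = a + k * q_in
deviation = q_out - (a + k * q_in)
full_scale_output = q_out.max() - q_out.min()

independent_linearity = np.abs(deviation).max() / full_scale_output * 100.0
print(f"independent linearity = +/-{independent_linearity:.2f}% of full scale")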

Span & Range : Instrument is operated from a minimum input value to a maximum input
value. This becomes the operating range of the measuring system. If

min

& xmax are the respective minimum & maximum input values

defining the input operating range extending from xmin & xmax. The
input span is expressed as

ri = x - x
min

Similarly, the output operating range is specified from ymin to ymax. The output span, or full-scale operating range (FSO), is expressed as
ro = ymax - ymin

In the proper procedure of calibration the inputs are applied within the
operating range. In practice during measurements it is important to
avoid extrapolation beyond the range of known calibration, since the
behavior of the system is uncharted in these regions.
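A trivial sketch of these definitions, assuming a 0-500 input range and a 4-20 mA output range purely for illustration:

x_min, x_max = 0.0, 500.0      # input operating range
y_min, y_max = 4.0, 20.0       # output operating range (e.g. a 4-20 mA signal)

r_i = x_max - x_min            # input span
r_o = y_max - y_min            # output span, the full-scale operating range (FSO)
print(r_i, r_o)                # 500.0 16.0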

Drift
Drift is a gradual shift in the calibration of an instrument over a period of time.
Zero Drift: If the whole calibration (output value against input value) gradually shifts by the same amount, it is called zero drift.
Span Drift: If there is a proportional change in the indication all along the upward scale, the drift is called span drift.
Zonal Drift: If the drift occurs only over a zone of the span, it is called zonal drift. Many environmental factors cause drift, e.g. stray electric and magnetic fields, temperature, mechanical vibrations, wear and tear, etc.
For example, consider a strain gauge: temperature is an interfering input to it. A temperature change causes the resistance of the gauge to vary and thus drifts the output value even when the strain is zero. Temperature is also a modifying input, since it changes the sensitivity of the strain gauge and thereby introduces a span drift, as shown in the figure below.

Drift occurs in flow meters because of wear of orifices or venturis. Drift may occur in thermocouples due to changes in their metals caused by contamination or chemical reactions.
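One way to picture the difference between zero drift and span drift is to model an initially linear instrument and perturb it; the sketch below is illustrative and not drawn from the text.

# q_out = a + k * q_in, perturbed by the two drift types
def reading(q_in, a=0.0, k=1.0, zero_drift=0.0, span_drift=0.0):
    # zero drift shifts every reading by the same amount;
    # span drift changes the indication proportionally along the scale
    return (a + zero_drift) + k * (1.0 + span_drift) * q_in

print(reading(100.0))                    # 100.0  (no drift)
print(reading(100.0, zero_drift=2.0))    # 102.0  (zero drift only)
print(reading(100.0, span_drift=0.03))   # 103.0  (3% span drift)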

Reproducibility
It is the degree of closeness with which a given value of the same measurand may be repeatedly measured under changed conditions of measurement, such as a different observer, a different method of measurement or a different location. Perfect reproducibility means that the instrument has no drift.

Repeatability
It is the degree of closeness of the results of successive measurements of the same measurand carried out under the same conditions of measurement over a certain period of time.
If an instrument is used many times and at different time intervals, the output may not be the same but shows a scatter. When this deviation from the ideal static characteristic is expressed in absolute units or as a fraction of the full scale, it is called the repeatability error.
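A short sketch of a repeatability-error calculation, taking the error here as the maximum scatter of repeated readings expressed as a percentage of the full scale (definitions based on the standard deviation are also common); the numbers are illustrative.

readings = [250.4, 249.8, 250.1, 250.6, 249.9]   # same input, same conditions, different times
full_scale_output = 500.0

scatter = max(readings) - min(readings)
repeatability_error = scatter / full_scale_output * 100.0
print(f"repeatability error = {repeatability_error:.2f}% of full scale")   # 0.16%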

Hysteresis
A sequential test is an effective diagnostic technique for identifying and quantifying hysteresis error in a measurement system. Hysteresis error refers to the difference between the values found while going upscale and downscale in a sequential test. It is often seen that the input-output graphs do not coincide for continuously ascending and then descending values of the input; this non-coincidence of the graphs arises due to the phenomenon of hysteresis.
Some causes of hysteresis in an instrument are internal friction, sliding or external friction, and free play or looseness of mechanisms. The effect of hysteresis on the calibration curve is shown in the figure above. For a particular input value, the hysteresis error is found from the difference between the upscale and downscale output values:
eh = (y)upscale - (y)downscale
Hysteresis is generally specified for a measurement system in terms of the maximum hysteresis error found in the calibration, eh,max, as a percentage of the full-scale output range:
%eh,max = (eh,max / ro) x 100

Some hysteresis is normal for any system and affects the precision of the
system. Hysteresis effects are best eliminated by taking readings
corresponding to ascending and descending values of the input and then
taking the arithmetic average.
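A sketch of this calculation, assuming one upscale and one downscale run over the same input values; the data are illustrative.

import numpy as np

y_upscale   = np.array([0.0, 24.4, 49.0, 74.0, 100.0])   # outputs for ascending inputs
y_downscale = np.array([1.5, 26.0, 50.5, 75.0, 100.0])   # outputs for descending inputs

eh = y_upscale - y_downscale                  # hysteresis error at each calibration point
ro = 100.0                                    # full-scale output span
percent_eh_max = np.abs(eh).max() / ro * 100.0
print(f"maximum hysteresis error = {percent_eh_max:.2f}% of FSO")   # 1.60%

# averaging the upscale and downscale readings largely cancels the effect
y_average = (y_upscale + y_downscale) / 2.0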

Threshold & Resolution


Threshold is the smallest measurable input while the resolution is the
smallest measurable change.
Consider an instrument to which an input is applied gradually. It is
observed that no output change is detected until certain minimum value of
input. This minimum value is taken as threshold input of the instrument.
Similarly if there is a certain minimum change in input to an instrument
there would be corresponding detectable change in output. This incremental
change in input is referred as resolution.
Threshold is measured when input is varied from zero while the resolution
is measured when the input is varied from any arbitrary non zero value.

Static Sensitivity:
When an input-output calibration has been performed, the static sensitivity of the instrument can be defined as the slope of the calibration curve. If the curve is not a straight line, the sensitivity will vary with the input value. To obtain a meaningful definition of sensitivity, the output quantity must be taken as the actual physical output quantity.

If the input-output relation is linear, the sensitivity is constant for all values
of input.

The sensitivity of an instrument having a non-linear static characteristic depends on the value of the input quantity, and should be specified as the local slope of the calibration curve, K = dqo / dqi, evaluated at the operating input value of interest.
For example, while measuring pressure we may plot the indicated reading against the applied pressure in kPa, but the actual physical output is the angular rotation of the pointer. Suppose the angular spacing of the kilopascal marks on the pressure gauge is 5 angular deg/kPa and the slope of the calibration graph is 1.08. The static sensitivity is then (5) * (1.08) = 5.4 angular deg/kPa. In this form, the sensitivity allows comparison of this pressure gauge with others as regards its ability to detect pressure changes.
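The arithmetic of this example in a one-line sketch, with the figures taken directly from the text:

curve_slope = 1.08        # slope of the plotted calibration graph (dimensionless)
pointer_spacing = 5.0     # angular degrees of pointer rotation per kPa mark

static_sensitivity = curve_slope * pointer_spacing
print(static_sensitivity, "angular deg/kPa")   # 5.4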
While the instrument's sensitivity to its desired input is of primary concern, its sensitivity to interfering or modifying inputs may also be of interest. For example, consider temperature as an input to the pressure gauge. Temperature changes can cause expansion and contraction that change the output reading even though the pressure has not changed; in this respect temperature acts as an interfering input. Temperature can also alter the modulus of elasticity of the gauge spring, changing its sensitivity to pressure; in this sense it is a modifying input.

Accuracy
Accuracy indicates the closeness of the measured value to the actual or true value, and is expressed in the form of the maximum error as a percentage of the full-scale reading. Thus, if the accuracy of a temperature indicator with a full-scale range of 0-500 deg is specified as ±0.5%, the measured value will be within ±2.5 deg of the true value, as determined against a standard instrument during calibration. If the indicator reads 250 deg, the possible error is still ±2.5 deg, i.e. ±1% of the reading.
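The same figures in a short sketch, assuming the ±0.5% specification is referred to the full-scale value:

full_scale = 500.0
accuracy_pct_of_fs = 0.5                              # % of full scale

max_error = accuracy_pct_of_fs / 100.0 * full_scale   # +/-2.5 deg anywhere on the scale
reading = 250.0
error_as_pct_of_reading = max_error / reading * 100.0
print(max_error, error_as_pct_of_reading)             # 2.5 1.0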

Precision
Precision indicates the repeatability or reproducibility of an instrument. If an instrument is used to measure the same input at different instants spread over the whole day, successive measurements may vary randomly. These random fluctuations of the readings are often due to random variations in several other factors which have not been taken into account while measuring the variable. A precision instrument is one whose successive readings are very close; in other words, the standard deviation of the set of measurements is very small.
