
The performance characteristics of an instrument are mainly divided into two categories:

i) Static characteristics
ii) Dynamic characteristics
Static characteristics:
The set of criteria defined for instruments that are used to measure quantities which vary slowly with time or are
mostly constant, i.e., do not vary with time, is called the static characteristics.
The various static characteristics are:
i) Accuracy
ii) Precision
iii) Sensitivity
iv) Linearity
v) Reproducibility
vi) Repeatability
vii) Resolution
viii) Threshold
ix) Drift
x) Stability
xi) Tolerance
xii) Range or span
Accuracy:
It is the degree of closeness with which the reading approaches the true value of the quantity to be measured.
The accuracy can be expressed in following ways:
a) Point accuracy:
Such accuracy is specified at only one particular point of scale.
It does not give any information about the accuracy at any other point on the scale.
b) Accuracy as percentage of scale span:
When an instrument has a uniform scale, its accuracy may be expressed in terms of the scale range.
c) Accuracy as percentage of true value:
The best way to conceive the idea of accuracy is to specify it in terms of the true value of the quantity being
measured.
Precision:
It is the measure of reproducibility, i.e., given a fixed value of a quantity, precision is a measure of the degree of
agreement within a group of measurements.
The precision is composed of two characteristics:
a) Conformity:
Consider a resistor having a true value of 2,385,692 Ω, which is being measured by an ohmmeter. The reader
can consistently read a value of 2.4 MΩ due to the non-availability of a proper scale. The error created due to
the limitation of the scale reading is a precision error.


b) Number of significant figures:
The precision of a measurement is obtained from the number of significant figures in which the reading is
expressed. The significant figures convey the actual information about the magnitude and the measurement
precision of the quantity. The precision can be mathematically expressed as:

P = 1 - |Xn - X̄n| / X̄n

Where, P = precision
Xn = value of the nth measurement
X̄n = average value of the set of measurements
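As a rough illustration of this formula, the short Python sketch below computes the precision of one reading
relative to the average of a set of readings; the values are made up for illustration only.

```python
# Sketch: precision of the nth reading relative to the average of the set,
# using P = 1 - |Xn - Xbar| / Xbar (values below are hypothetical).
readings = [101, 102, 103, 104, 105, 103, 105, 107, 107, 108]

x_bar = sum(readings) / len(readings)   # average value of the set
x_n = readings[-1]                      # value of the nth (here, the last) measurement
precision = 1 - abs(x_n - x_bar) / x_bar

print(f"Average = {x_bar:.1f}, precision of last reading = {precision:.4f}")
```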
Sensitivity:
The sensitivity denotes the smallest change in the measured variable to which the instrument responds. It is
defined as the ratio of the change in the output of an instrument to the change in the value of the quantity to be
measured. Mathematically it is expressed as,

Sensitivity = change in the output signal / change in the input signal (measurand)

Thus, if the calibration curve is linear, the sensitivity of the instrument is the slope of the calibration curve. If the
calibration curve is not linear, the sensitivity varies with the input. Inverse sensitivity or deflection factor is
defined as the reciprocal of sensitivity:

Inverse sensitivity or deflection factor = 1 / sensitivity
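For example (hypothetical figures), if a thermometer's mercury column moves 1 cm for every 10 °C rise in
temperature, its sensitivity is 0.1 cm/°C and its inverse sensitivity, or deflection factor, is 10 °C/cm.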

Reproducibility:
It is the degree of closeness with which a given value may be repeatedly measured. It is specified in terms of
scale readings over a given period of time.
Repeatability:
It is defined as the variation of scale reading and is random in nature.
Drift:
Drift may be classified into three categories:
a) Zero drift:
If the whole calibration gradually shifts due to slippage, permanent set, or due to undue warming up of
electronic tube circuits, zero drift sets in.

b) Span drift or sensitivity drift:
If there is a proportional change in the indication all along the upward scale, the drift is called span drift or
sensitivity drift.
c) Zonal drift:
In case the drift occurs only over a portion of the span of an instrument, it is called zonal drift.
Resolution:
If the input is slowly increased from some arbitrary input value, it will again be found that the output does not
change at all until a certain increment is exceeded. This increment is called the resolution.
Threshold:
If the instrument input is increased very gradually from zero there will be some minimum value below which no
output change can be detected. This minimum value defines the threshold of the instrument.
Stability:
It is the ability of an instrument to retain its performance throughout its specified operating life.
Tolerance:
The maximum allowable error in the measurement is specified in terms of some value which is called tolerance.
Range or span:
The minimum and maximum values of the quantity that an instrument is designed to measure are called its
range or span.

B) Dynamic characteristics:
The set of criteria defined for instruments that are used to measure quantities which change rapidly with time is
called the dynamic characteristics.
The various dynamic characteristics are:
i) Speed of response
ii) Measuring lag
iii) Fidelity
iv) Dynamic error

Speed of response:
It is defined as the rapidity with which a measurement system responds to changes in the measured quantity.
Measuring lag:
It is the retardation or delay in the response of a measurement system to changes in the measured quantity.
The measuring lags are of two types:
a) Retardation type:
In this case the response of the measurement system begins immediately after the change in measured
quantity has occurred.
b) Time delay lag:
In this case the response of the measurement system begins after a dead time after the application of the input.
Fidelity:
It is defined as the degree to which a measurement system indicates changes in the measured quantity without
dynamic error.
Dynamic error:
It is the difference between the true value of the quantity changing with time & the value indicated by the
measurement system if no static error is assumed. It is also called measurement error.

What is Multivariate Statistical Analysis?


Multivariate statistical analysis refers to multiple advanced techniques for examining
relationships among multiple variables at the same time. Researchers use multivariate
procedures in studies that involve more than one dependent variable (also known as the
outcome or phenomenon of interest), more than one independent variable (also known as a
predictor) or both. Upper-level undergraduate courses and graduate courses in statistics
teach multivariate statistical analysis. This type of analysis is desirable because researchers
often hypothesize that a given outcome of interest is affected or influenced by more than one
thing.

Types:
There are many statistical techniques for conducting multivariate analysis, and the most
appropriate technique for a given study varies with the type of study and the key research
questions. Four of the most common multivariate techniques are multiple regression
analysis, factor analysis, path analysis, and multivariate analysis of variance, or MANOVA.

Multiple Regression
Multiple regression analysis, often referred to simply as regression analysis, examines the
effects of multiple independent variables (predictors) on the value of a dependent variable,
or outcome. Regression calculates a coefficient for each independent variable, as well as its
statistical significance, to estimate the effect of each predictor on the dependent variable,
with other predictors held constant. Researchers in economics and other social sciences often
use regression analysis to study social and economic phenomena. An example of a
regression study is to examine the effect of education, experience, gender, and ethnicity on
income.
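
As a rough sketch of how such a regression might be run in practice, the Python example below fits an ordinary
least squares model with statsmodels on a small, entirely hypothetical data set (only education, experience, and
gender are included here for brevity).

```python
# Sketch: multiple regression of income on education, experience, and gender
# using statsmodels' formula interface; all data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "income":     [32000, 45000, 51000, 39000, 62000, 48000, 55000, 41000],
    "education":  [12, 16, 16, 14, 18, 16, 17, 13],   # years of schooling
    "experience": [5, 8, 12, 7, 15, 10, 11, 6],       # years of work experience
    "gender":     ["F", "M", "F", "M", "F", "M", "F", "M"],
})

# Each coefficient estimates the effect of its predictor on income with the
# other predictors held constant; C() treats gender as categorical.
model = smf.ols("income ~ education + experience + C(gender)", data=df).fit()
print(model.summary())
```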

Factor Analysis
Factor analysis is a data reduction technique in which a researcher reduces a large number of
variables to a smaller, more manageable, number of factors. Factor analysis uncovers
patterns among variables and then clusters highly interrelated variables into factors. Factor
analysis has many applications, but a common use is in survey research, where researchers
use the technique to see if lengthy series of questions can be grouped into shorter sets.
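
The following Python sketch illustrates the idea with scikit-learn's FactorAnalysis on simulated survey items;
the two latent factors and the six items are invented purely for illustration.

```python
# Sketch: reduce six simulated survey items to two factors with scikit-learn.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 200
f1 = rng.normal(size=n)   # first latent factor (simulated)
f2 = rng.normal(size=n)   # second latent factor (simulated)

# Six observed items: the first three load mainly on f1, the last three on f2.
X = np.column_stack([f1, f1, f1, f2, f2, f2]) + 0.3 * rng.normal(size=(n, 6))

fa = FactorAnalysis(n_components=2).fit(X)
print(fa.components_)   # loadings: rows are factors, columns are the six items
```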

Path Analysis
This is a graphical form of multivariate statistical analysis in which graphs known as path
diagrams depict the correlations among variables, as well as the directions of those
correlations and the "paths" along which these relationships travel. Statistical software
programs calculate path coefficients, the values of which estimate the strength of
relationships among the variables in a researcher's hypothesized model.

MANOVA
Multivariate analysis of variance, or MANOVA, is an advanced form of the more basic analysis of
variance, or ANOVA. MANOVA extends the technique to studies with two or more related
dependent variables while controlling for the correlations among them. An example of a
study for which MANOVA would be an appropriate technique is a study of health among three
groups of teens: those who exercise regularly, those who exercise on occasion, and those
who never exercise. A MANOVA for this study would allow multiple health-related outcome
measures such as weight, heart rate, and respiratory rates.
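
A minimal sketch of such an analysis with statsmodels' MANOVA class is shown below; the group labels,
outcome columns, and values are all hypothetical.

```python
# Sketch: MANOVA comparing three exercise groups on three related outcomes,
# using statsmodels; all values and column names are hypothetical.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.DataFrame({
    "group":      ["regular"] * 5 + ["occasional"] * 5 + ["never"] * 5,
    "weight":     [60, 62, 58, 61, 59, 66, 68, 65, 70, 67, 75, 78, 74, 80, 77],
    "heart_rate": [62, 60, 64, 61, 63, 70, 72, 69, 74, 71, 82, 85, 80, 88, 84],
    "resp_rate":  [14, 13, 15, 14, 13, 16, 17, 16, 18, 17, 20, 21, 19, 22, 21],
})

# The three outcomes appear together on the left-hand side of the formula;
# mv_test() reports Wilks' lambda, Pillai's trace, etc. for the group effect.
manova = MANOVA.from_formula("weight + heart_rate + resp_rate ~ group", data=df)
print(manova.mv_test())
```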

Benefits
Multivariate statistical analysis is especially important in social science research because
researchers in these fields are often unable to use randomized laboratory experiments that
their counterparts in medicine and natural sciences often use. Instead, many social scientists
must rely on quasi-experimental designs in which the experimental and control groups may
have initial differences that could affect or bias the outcome of the study. Multivariate

techniques try to statistically account for these differences and adjust outcome measures to
control for the portion that can be attributed to the differences.

Considerations
Statistical software programs such as SAS, Stata, and SPSS can perform multivariate
statistical analyses. These programs are frequently used by university researchers and other
research professionals. Spreadsheet programs can perform some multivariate analyses, but they are intended
for more general use and may have more limited capabilities than a specialized statistical software package.

Principal component analysis (PCA) is a statistical procedure that uses an orthogonal transformation
to convert a set of observations of possibly correlated variables into a set of values of linearly
uncorrelated variables called principal components. The number of principal components is less than
or equal to the number of original variables. This transformation is defined in such a way that the first
principal component has the largest possible variance (that is, accounts for as much of the variability
in the data as possible), and each succeeding component in turn has the highest variance possible
under the constraint that it is orthogonal to the preceding components. The resulting vectors are an
uncorrelated orthogonal basis set. PCA is sensitive to the relative scaling of the original variables.
PCA was invented in 1901 by Karl Pearson,[1] as an analogue of the principal axis theorem in
mechanics; it was later independently developed (and named) by Harold Hotelling in the 1930s.[2]
Depending on the field of application, it is also named the discrete Kosambi-Karhunen-Loève
transform (KLT) in signal processing, the Hotelling transform in multivariate quality control, proper
orthogonal decomposition (POD) in mechanical engineering, singular value decomposition (SVD) of
X (Golub and Van Loan, 1983), eigenvalue decomposition (EVD) of X^T X in linear algebra, factor
analysis (for a discussion of the differences between PCA and factor analysis see Ch. 7 of [3]), the
Eckart-Young theorem (Harman, 1960) or Schmidt-Mirsky theorem in psychometrics, empirical orthogonal
functions (EOF) in meteorological science, empirical eigenfunction decomposition (Sirovich, 1987),
empirical component analysis (Lorenz, 1956), quasiharmonic modes (Brooks et al., 1988), spectral
decomposition in noise and vibration, and empirical modal analysis in structural dynamics.
PCA is mostly used as a tool in exploratory data analysis and for making predictive models. PCA can
be done by eigenvalue decomposition of a data covariance (or correlation) matrix or singular value
decomposition of a data matrix, usually after mean centering (and normalizing or using Z-scores) the
data matrix for each attribute.[4] The results of a PCA are usually discussed in terms of component
scores, sometimes called factor scores (the transformed variable values corresponding to a particular
data point), and loadings (the weight by which each standardized original variable should be
multiplied to get the component score).[5]
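The following Python sketch illustrates this recipe: mean-centre the data matrix, take the eigenvalue
decomposition of its covariance matrix, and read off the component scores and explained variance; the
correlated random data are for illustration only.

```python
# Sketch: PCA via eigenvalue decomposition of the covariance matrix of a
# mean-centred data matrix; the correlated random data are for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) @ np.array([[2.0, 0.5, 0.1],
                                          [0.0, 1.0, 0.3],
                                          [0.0, 0.0, 0.2]])

Xc = X - X.mean(axis=0)                   # mean-centre each attribute
cov = np.cov(Xc, rowvar=False)            # covariance matrix of the variables
eigvals, eigvecs = np.linalg.eigh(cov)    # eigh: symmetric matrix, ascending eigenvalues

order = np.argsort(eigvals)[::-1]         # largest variance first
loadings = eigvecs[:, order]              # columns are the principal directions
scores = Xc @ loadings                    # component scores of each observation

print("Proportion of variance explained:", eigvals[order] / eigvals.sum())
```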
PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be
thought of as revealing the internal structure of the data in a way that best explains the variance in the
data. If a multivariate dataset is visualised as a set of coordinates in a high-dimensional data space (1
axis per variable), PCA can supply the user with a lower-dimensional picture, a projection or "shadow"
of this object when viewed from its (in some sense; see below) most informative viewpoint. This is
done by using only the first few principal components so that the dimensionality of the transformed
data is reduced.
PCA is closely related to factor analysis. Factor analysis typically incorporates more domain specific
assumptions about the underlying structure and solves eigenvectors of a slightly different matrix.

PCA is also related to canonical correlation analysis (CCA). CCA defines coordinate systems that
optimally describe the cross-covariance between two datasets while PCA defines a new orthogonal
coordinate system that optimally describes variance in a single dataset.[6][7]
What is principal components analysis?
Principal components analysis is a procedure for identifying a smaller number of uncorrelated
variables, called "principal components", from a large set of data. The goal of principal components
analysis is to explain the maximum amount of variance with the fewest number of principal
components. Principal components analysis is commonly used in the social sciences, market research,
and other industries that use large data sets.
Principal components analysis is commonly used as one step in a series of analyses. You can use
principal components analysis to reduce the number of variables and avoid multicollinearity, or when
you have too many predictors relative to the number of observations.

Example
A consumer products company wants to analyze customer responses to several characteristics of a new
shampoo: color, smell, texture, cleanliness, shine, volume, amount needed to lather, and price. They
perform a principal components analysis to determine whether they can form a smaller number of
uncorrelated variables that are easier to interpret and analyze. The results identify the following
patterns:

Color, smell, and texture form a "Shampoo quality" component.

Cleanliness, shine, and volume form an "Effect on hair" component.


Amount needed to lather and price form a "Value" component.
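
A hedged sketch of how such a result could be obtained with scikit-learn's PCA is shown below; the response
matrix here is random placeholder data standing in for real customer ratings of the eight characteristics.

```python
# Sketch: PCA of (placeholder) customer ratings of the eight shampoo
# characteristics; loadings suggest which attributes group into a component.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

attributes = ["color", "smell", "texture", "cleanliness",
              "shine", "volume", "lather", "price"]
responses = np.random.default_rng(1).normal(size=(150, 8))   # placeholder ratings

Z = StandardScaler().fit_transform(responses)   # standardise before PCA
pca = PCA(n_components=3).fit(Z)

for i, comp in enumerate(pca.components_, start=1):
    print(f"Component {i}:", dict(zip(attributes, comp.round(2))))
```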

In Factor Analysis, the original variables are defined as linear combinations of the factors.
In Principal Components Analysis, the goal is to explain as much of the total variance in
the variables as possible. The goal in Factor Analysis is to explain the covariances or
correlations between the variables.

Uncertainty analysis
Uncertainty analysis investigates the uncertainty of variables that are used in decision-making
problems in which observations and models represent the knowledge base. In other words, uncertainty
analysis aims to make a technical contribution to decision-making through the quantification of
uncertainties in the relevant variables.
In physical experiments uncertainty analysis, or experimental uncertainty assessment, deals with
assessing the uncertainty in a measurement. An experiment designed to determine an effect,
demonstrate a law, or estimate the numerical value of a physical variable will be affected by errors due
to instrumentation, methodology, presence of confounding effects and so on. Experimental uncertainty
estimates are needed to assess the confidence in the results.[1] A related field is design of experiments.
Likewise in numerical experiments and modelling uncertainty analysis draws upon a number of
techniques for determining the reliability of model predictions, accounting for various sources of
uncertainty in model input and design.[2] A related field is sensitivity analysis.
A calibrated parameter does not necessarily represent reality, as reality is much more complex. Any
prediction has its own complexities of reality that cannot be represented uniquely in the calibrated
model; therefore, there is a potential error. Such error must be accounted for when making
management decisions on the basis of model outcomes. [3]

Error Analysis
Introduction
The knowledge we have of the physical world is obtained by doing experiments and making
measurements. It is important to understand how to express such data and how to analyze and draw
meaningful conclusions from it.
In doing this it is crucial to understand that all measurements of physical quantities are subject to
uncertainties. It is never possible to measure anything exactly. It is good, of course, to make the error
as small as possible but it is always there. And in order to draw valid conclusions the error must be
indicated and dealt with properly.
Take the measurement of a person's height as an example. Assuming that her height has been
determined to be 5' 8", how accurate is our result?
Well, the height of a person depends on how straight she stands, whether she just got up (most people
are slightly taller when getting up from a long rest in horizontal position), whether she has her shoes
on, and how long her hair is and how it is made up. These inaccuracies could all be called errors of
definition. A quantity such as height is not exactly defined without specifying many other
circumstances.
Even if you could precisely specify the "circumstances," your result would still have an error
associated with it. The scale you are using is of limited accuracy; when you read the scale, you may
have to estimate a fraction between the marks on the scale, etc.
If the result of a measurement is to have meaning it cannot consist of the measured value alone. An
indication of how accurate the result is must be included also. Indeed, typically more effort is required
to determine the error or uncertainty in a measurement than to perform the measurement itself. Thus,
the result of any physical measurement has two essential components: (1) A numerical value (in a
specified system of units) giving the best estimate possible of the quantity measured, and (2) the
degree of uncertainty associated with this estimated value. For example, a measurement of the width
of a table would yield a result such as 95.3 +/- 0.1 cm.
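As a small illustration of reporting a best estimate together with its uncertainty, the Python sketch below takes
the mean of a few hypothetical repeated width readings and uses the standard error of the mean as the quoted
uncertainty (one common convention among several).

```python
# Sketch: quote a measurement as best estimate +/- uncertainty, using the mean
# and the standard error of repeated (hypothetical) readings of a table width.
import numpy as np

widths = np.array([95.2, 95.4, 95.3, 95.1, 95.4, 95.3])   # repeated readings, cm

best_estimate = widths.mean()
uncertainty = widths.std(ddof=1) / np.sqrt(len(widths))    # standard error of the mean

print(f"Width = {best_estimate:.2f} +/- {uncertainty:.2f} cm")
```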
Probable Error of the Coefficient of Correlation:
The probable error is the amount by which the arithmetic mean of a sample is expected to vary because of chance alone.
After the calculation of the coefficient of correlation, the next thing is to find out the extent to which it is dependable.
For this purpose the probable error of the coefficient of correlation is calculated. The Theory of Errors forms part
of the Theory of Sampling and as such we shall not discuss it in detail here. However, in chapters on Sampling,
the Theory of Errors would be fully examined. At this place it is enough to write that if the probable error is
added to and subtracted from the coefficient of correlation it would give two such limits within which we can
reasonably expect the value of coefficient of correlation to vary. It means that if from the same universe another
set of samples was selected on the basis of random sampling, the coefficient of correlation between the two
variables in this new sample would not fall outside the limits so established.
The formula for finding out the probable error of Karl Pearson's coefficient of correlation is:

P.E. = 0.6745 × (1 − r²) / √n

where r is the coefficient of correlation and n is the number of pairs of observations. For example, if r = 0.8
and n = 100:

P.E. = 0.6745 × (1 − 0.8²) / √100 = 0.6745 × 0.36 / 10 = 0.24282 / 10 = 0.0243

(i) It means that in the universe the limits of the coefficient of correlation should be r ± P.E., i.e., 0.8 ± 0.0243,
or between 0.7757 and 0.8243.
(ii) If the value of r is less than the probable error, there is no evidence of correlation.
(iii) If the value of r is more than six times the probable error, the correlation is significant.
(iv) If the probable error is not much and the coefficient of correlation is 0.5 or more, it is generally considered
to be significant.
(v) Probable error as a measure for interpreting the coefficient of correlation should be used only when the
number of pairs of observations is large. If n is small, the probable error may give misleading conclusions.
(vi) Probable error as a measure for interpreting the coefficient of correlation should be used only when a
sample study is being made and the sample is unbiased and representative. With the help of sample studies,
the correlation in the universe is estimated.
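
The probable-error calculation above is easy to reproduce; the short Python sketch below implements the
formula and checks the worked example with r = 0.8 and n = 100.

```python
# Sketch: probable error of Karl Pearson's coefficient of correlation,
# P.E. = 0.6745 * (1 - r**2) / sqrt(n), checked against the worked example.
import math

def probable_error(r: float, n: int) -> float:
    return 0.6745 * (1 - r ** 2) / math.sqrt(n)

pe = probable_error(r=0.8, n=100)
print(f"P.E. = {pe:.4f}")                              # about 0.0243
print(f"Limits: {0.8 - pe:.4f} to {0.8 + pe:.4f}")     # about 0.7757 to 0.8243
```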

The Probable Error of the Standard Deviation
