
4.

a) Estimating parameters in dynamical systems


Estimation theory is a branch of statistics that deals with estimating the values of
parameters based on measured empirical data that has a random component. The parameters
describe an underlying physical setting in such a way that their value affects the distribution
of the measured data. An estimator attempts to approximate the unknown parameters using
the measurements.
For example, it is desired to estimate the proportion of a population of voters who will vote
for a candidate. That proportion is the parameter sought; the estimate is based on a small
random sample of voters.
Or, for example, in radar the goal is to estimate the range of objects (airplanes, boats, etc.) by
analysing the two-way transit timing of received echoes of transmitted pulses. Since the
reflected pulses are unavoidably embedded in electrical noise, their measured values are
randomly distributed, so that the transit time must be estimated.
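The voter example above can be sketched numerically. A minimal sketch, in which the population size, true proportion, and sample size are invented for illustration; the sample mean is the estimator of the unknown proportion:

```python
import random

random.seed(0)

# Hypothetical population: 1 means the person votes for the candidate.
# The true proportion (0.46 here, invented) is the unknown parameter.
population = [1] * 46000 + [0] * 54000

# Draw a small random sample and use the sample proportion as the estimator.
sample = random.sample(population, 1000)
p_hat = sum(sample) / len(sample)

# Approximate standard error of the sample proportion.
se = (p_hat * (1 - p_hat) / len(sample)) ** 0.5
print(f"estimate: {p_hat:.3f} +/- {1.96 * se:.3f}")
```

The estimate varies from sample to sample; the standard-error term quantifies that random component.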
The least-squares method can be used to estimate parameters in models of dynamical systems. The way to do this depends on the character of the model and its parametrization.
Finite impulse response (FIR) models:
A linear time-invariant dynamical system is uniquely characterized by its impulse response. The impulse response is in general infinite-dimensional. For stable systems the impulse response goes to zero exponentially fast and may then be truncated. However, many parameters may be required if the sampling interval is short in comparison with the slowest time constant of the system. Truncation results in the so-called finite impulse response (FIR) model, which is also called a transversal filter. The model is given by the equation
y(t) = b1 u(t-1) + b2 u(t-2) + ... + bn u(t-n) + e(t)        Equation (2.33)
where u is the input, y is the output, e is a disturbance, and b1, ..., bn are the impulse-response coefficients to be estimated.
This model is identical to the linear regression model. The parameter estimator can be represented by a block diagram: the estimator may be regarded as a system with inputs u and y whose output is the parameter estimate. Since the signal y is available in the system, we can also consider y(t) as an output.
Figure (2.3)
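Least-squares estimation of the FIR coefficients can be sketched as follows; the true coefficients, noise level, and data length are invented, and the regression matrix is built from delayed inputs exactly as in the FIR model above:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented FIR coefficients: y(t) = b1*u(t-1) + b2*u(t-2) + e(t)
b_true = np.array([0.8, -0.3])
n = len(b_true)

# Exciting input (white noise) and a small disturbance.
N = 500
u = rng.standard_normal(N)
e = 0.05 * rng.standard_normal(N)

# Simulate the FIR model.
y = np.zeros(N)
for t in range(n, N):
    y[t] = sum(b_true[k] * u[t - k - 1] for k in range(n)) + e[t]

# Regression matrix: row for time t holds [u(t-1), u(t-2)].
Phi = np.column_stack([u[n - k - 1 : N - k - 1] for k in range(n)])
b_hat = np.linalg.lstsq(Phi, y[n:], rcond=None)[0]
print(b_hat)  # close to [0.8, -0.3]
```

Because the model is linear in the parameters, the estimate is obtained in one linear least-squares solve.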

Transfer function models:


Transfer function models describe the relationship between the inputs and outputs of a
system using a ratio of polynomials. The model order is equal to the order of the denominator
polynomial. The roots of the denominator polynomial are referred to as the model poles. The
roots of the numerator polynomial are referred to as the model zeros.

The parameters of a transfer function model are its poles, zeros and transport delays.
Continuous-Time Representation
In continuous time, a transfer function model has the form:

Y(s) = (num(s)/den(s)) U(s) + E(s)

where Y(s), U(s) and E(s) represent the Laplace transforms of the output, input and noise,
respectively. num(s) and den(s) represent the numerator and denominator polynomials that
define the relationship between the input and the output.
Discrete-Time Representation
In discrete time, a transfer function model has the form:

y(t) = (num(q^-1)/den(q^-1)) u(t) + e(t)

where

num(q^-1) = b0 + b1 q^-1 + b2 q^-2 + ...
den(q^-1) = 1 + a1 q^-1 + a2 q^-2 + ...

The roots of num(q^-1) and den(q^-1) are expressed in terms of the lag variable q^-1.
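Because the discrete-time model is linear in the coefficients a_i and b_i, its parameters can be recovered by least squares from input-output data. A minimal sketch, assuming a first-order model with invented coefficients (a1 = -0.7, b1 = 0.5) and a white-noise input:

```python
import numpy as np

# Hypothetical discrete-time transfer function with a one-step delay:
#   den(q^-1) = 1 + a1*q^-1,  num(q^-1) = b1*q^-1
a1, b1 = -0.7, 0.5  # model pole at q = 0.7

rng = np.random.default_rng(2)
N = 400
u = rng.standard_normal(N)
y = np.zeros(N)

# Equivalent difference equation: y(t) = -a1*y(t-1) + b1*u(t-1)
for t in range(1, N):
    y[t] = -a1 * y[t - 1] + b1 * u[t - 1]

# Regression form: y(t) = a1*(-y(t-1)) + b1*u(t-1), linear in (a1, b1).
Phi = np.column_stack([-y[: N - 1], u[: N - 1]])
theta = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
print(theta)  # close to [-0.7, 0.5]
```

The regressor now contains delayed outputs as well as delayed inputs, which is the main difference from the FIR case.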

Non-linear models:
Least squares can be applied to certain nonlinear models. The essential restriction is that the models be linear in the parameters, so that they can be written as linear regression models. A nonlinear model describes nonlinear relationships in experimental data. Nonlinear regression models are generally assumed to be parametric, where the model is described as a nonlinear equation. Typically, machine-learning methods are used for non-parametric nonlinear regression.
Parametric nonlinear regression models the dependent variable (also called the response) as a
function of a combination of nonlinear parameters and one or more independent variables
(called predictors). The model can be univariate (with a single response variable) or
multivariate (with multiple response variables).
The parameters can take the form of an exponential, trigonometric, power, or any other
nonlinear function. To determine the nonlinear parameter estimates, an iterative algorithm is typically used.
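One common iterative algorithm is Gauss-Newton, which repeatedly linearizes the model around the current estimate and solves the resulting linear least-squares problem. A sketch, assuming a hypothetical exponential model y = a*exp(b*x) with invented true parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical parametric nonlinear model: y = a * exp(b * x).
# a and b enter nonlinearly, so one linear least-squares solve is not enough.
a_true, b_true = 2.0, -0.5
x = np.linspace(0.0, 5.0, 50)
y = a_true * np.exp(b_true * x) + 0.01 * rng.standard_normal(x.size)

# Gauss-Newton iteration from an initial guess.
a, b = 1.5, -0.3
for _ in range(20):
    f = a * np.exp(b * x)
    J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])  # df/d(a, b)
    step = np.linalg.lstsq(J, y - f, rcond=None)[0]
    a, b = a + step[0], b + step[1]

print(a, b)  # close to (2.0, -0.5)
```

Each iteration is an ordinary least-squares problem on the Jacobian, which is why the linear theory reappears inside the nonlinear algorithm.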
Stochastic models:
A stochastic model is a tool for estimating probability distributions of potential
outcomes by allowing for random variation in one or more inputs over time. The random
variation is usually based on fluctuations observed in historical data for a selected period
using standard time-series techniques. The least-squares estimate is biased when it is used on data generated by a system in which the errors are correlated.
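The bias under correlated errors can be seen numerically. A sketch, assuming a hypothetical first-order process whose disturbance is a moving average (all coefficients invented):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical first-order process y(t) = a*y(t-1) + v(t), where the
# disturbance v(t) = e(t) + c*e(t-1) is correlated in time (moving average).
a_true, c = 0.5, 0.9
N = 20000
e = rng.standard_normal(N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = a_true * y[t - 1] + e[t] + c * e[t - 1]

# Least squares of y(t) on y(t-1) is biased here, because the regressor
# y(t-1) is correlated with the disturbance term e(t-1).
a_hat = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])
print(a_hat)  # noticeably larger than the true value 0.5
```

With white (uncorrelated) errors the same estimator would converge to the true value; the correlation is what breaks it.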

b) Experimental conditions
The properties of the data used in parameter estimation are crucial for the quality of the estimates. For example, it is obvious that no useful parameter estimates can be obtained if all signals are identically zero. The experimental conditions thus influence the quality of the estimates. In performing system identification automatically, as in an adaptive system, it is essential to understand these conditions, as well as the mechanisms that can interfere with proper identification. The notion of persistent excitation, which is one way to characterize process inputs, is introduced below. In an adaptive system the plant input is generated by feedback. Consider the estimation of parameters in a FIR model given by the following equation. The parameters of the model cannot be determined unless some conditions are imposed on the input signal. It follows from the condition for uniqueness of the least-squares estimate that the minimum is unique if the matrix has full rank. This condition is called the excitation condition.
Equation (2.42)
For long data sets, all sums in the above equation can be taken from 1 to L; we then get
Equation (2.43)
Equation (2.44)
Persistent excitation: A signal u is called persistently exciting of order n if the limits exist and if the matrix C is positive definite.
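The excitation condition can be checked numerically. In the sketch below, c(k) is taken as the empirical input covariance c(k) = (1/N) * sum_t u(t)u(t+k) and C as the matrix with entries c(i-j); the function name and the test signals are illustrative assumptions. White noise excites any order, while a constant input excites only order 1:

```python
import numpy as np

def excitation_matrix(u, n):
    """Empirical excitation matrix C with entries c(i-j), where
    c(k) = (1/N) * sum_t u(t)*u(t+k)."""
    N = len(u)
    c = [u[: N - k] @ u[k:] / N for k in range(n)]
    return np.array([[c[abs(i - j)] for j in range(n)] for i in range(n)])

rng = np.random.default_rng(5)
N = 2000

# White noise: C is positive definite, so u is persistently exciting.
C_noise = excitation_matrix(rng.standard_normal(N), 4)
print(np.linalg.eigvalsh(C_noise).min() > 0)  # True

# A constant (step-like) input only excites order 1: C of order 4 is
# nearly singular, so four FIR parameters cannot be identified from it.
C_step = excitation_matrix(np.ones(N), 4)
print(np.linalg.eigvalsh(C_step).min())  # close to zero
```

Checking the smallest eigenvalue of C is a direct numerical test of the positive-definiteness required by the definition.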
