A. Identification or Modeling:
Figure 3 illustrates the identification problem. The same input x(n) is fed to both the adaptive filter and the unknown system. y(n) is the adaptive filter's output, which is compared with the desired response d(n). This comparison generates the error signal, which is then used to adjust the weights w(n) so as to reduce the error and thereby identify the system.

Figure 5: Predictor Adaptive Filter
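The identification loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's MATLAB code: the "unknown system" h, the filter length, and the learning rate are my own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unknown system to identify (4-tap FIR impulse response).
h = np.array([0.5, -0.3, 0.2, 0.1])

N, L, mu = 5000, 4, 0.05          # samples, filter length, learning rate
x = rng.standard_normal(N)        # common input x(n) fed to both systems
d = np.convolve(x, h)[:N]         # unknown system's output = desired response d(n)

w = np.zeros(L)                   # adaptive filter weights w(n)
for n in range(L, N):
    X = x[n - L + 1:n + 1][::-1]  # input vector [x(n), ..., x(n-L+1)]
    y = w @ X                     # adaptive filter output y(n)
    e = d[n] - y                  # error: desired response minus filter output
    w = w + mu * e * X            # LMS weight update

print(np.round(w, 3))             # w approaches h as the error is driven down
```

Because the desired response here is an exact linear function of the input, the weights converge to the unknown impulse response; with measurement noise they would converge to its vicinity.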
Figure 3: System Identification or Modeling

B. Inverse Modeling:
Figure 4 illustrates inverse modeling, which is also known as deconvolution. Its goal is to find and track the inverse of a system's transfer function. The system receives an input x(n), and its output u(n) is fed to the adaptive filter. Delaying the input x(n) gives the desired output d(n). The filter output y(n) is then compared with the desired response d(n), and the resulting error is used to update the filter weights.

D. Noise Cancellation:
Noise cancellation is the main focus of this research paper. The idea behind this application is as follows: let d(n) be a desired output corrupted by a noise n2(n); this is called the primary signal:

Primary signal = s(n) + n2(n)

d(n) is then compared with y(n). The input fed to the adaptive filter is a reference signal n1(n), which is correlated with the noise that corrupts the primary signal. The system output e(n) is the difference between the filter output and the desired output. In the ideal case, this error equals the original signal s(n).
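The noise-cancellation scheme described above can be sketched as follows. This is an illustrative Python sketch rather than the paper's MATLAB experiment; the sinusoidal s(n), the FIR path relating n1(n) to n2(n), and all parameters are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

N, L, mu = 8000, 8, 0.01
t = np.arange(N)
s = np.sin(2 * np.pi * 0.01 * t)           # clean signal s(n)
n1 = rng.standard_normal(N)                # reference noise n1(n)
# n2(n): noise corrupting the primary signal, correlated with n1(n)
# through a hypothetical FIR path.
path = np.array([0.6, 0.25, -0.1])
n2 = np.convolve(n1, path)[:N]
d = s + n2                                 # primary signal d(n) = s(n) + n2(n)

w = np.zeros(L)
e = np.zeros(N)
for n in range(L, N):
    X = n1[n - L + 1:n + 1][::-1]          # reference vector fed to the filter
    y = w @ X                              # filter's estimate of n2(n)
    e[n] = d[n] - y                        # system output: ideally e(n) = s(n)
    w = w + mu * e[n] * X                  # LMS weight update

# After convergence the residual e(n) should track the clean signal s(n).
print(np.mean((e[-1000:] - s[-1000:]) ** 2))
```

Since s(n) is uncorrelated with the reference n1(n), the filter can only cancel the n2(n) component of d(n), so the error output converges toward s(n).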
The learning rate μ ensures the stability of the algorithm and makes it more effective. A drawback of pure LMS is its sensitivity to the scaling of its inputs, which makes the selection of the learning rate μ difficult. The normalized least mean squares (NLMS) filter is a modification of the LMS algorithm that normalizes the update with the power of the input.

NLMS algorithm outline:
Parameters: p = filter order
            μ = step size
Initialization: ĥ(0) = 0
Computation: for n = 0, 1, 2, ...
    X(n) = [x(n), x(n-1), ..., x(n-p+1)]^T
    e(n) = d(n) - ĥ^H(n) X(n)
    ĥ(n+1) = ĥ(n) + μ e*(n) X(n) / (X^H(n) X(n))

Optimal learning rate:
If there is no interference [v(n) = 0], the optimal learning rate for the NLMS algorithm is

    μopt = 1

and is independent of the input X(n) and the real (unknown) impulse response h(n). In the general case with interference v(n) ≠ 0, the optimal learning rate is

    μopt = E[|y(n) − ŷ(n)|²] / E[|e(n)|²]

where we assume that the signals v(n) and X(n) are uncorrelated with each other.

The RLS algorithm minimizes a weighted linear least squares error. RLS gives outstanding performance in non-stationary situations.

VIII. MATLAB EXPERIMENTS

A. LMS algorithm
Some alterations have been made to the setup of figure 6 and the LMS summary to achieve interference cancellation. Figure 7 shows the result of applying the LMS algorithm to the interference problem, where s(n) is the input, the desired signal is d(n) = s(n) + n2(n), and the error signal should be equal to the input signal s(n). In figure 7 the step-size parameter μ is set to 0.0002 and the length of the adaptive filter is kept at 5. The input signal s(n) is shown in blue, the noise-corrupted signal s(n) + n2(n) in green, and the error signal e(n) in red.

L = 5, μ = 0.0002
L = 5, δ = 3.2, μ = 0.005
L = 5, λ = 1
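The NLMS outline translates directly into code. Below is a minimal Python sketch (the paper's experiments are in MATLAB) applying the normalized update to an interference-cancellation setup; the test signals, the step size μ, and the small regularizer eps in the denominator are my own assumptions, with the filter length L = 5 taken from the experiment description.

```python
import numpy as np

rng = np.random.default_rng(2)

# Interference-cancellation setup (illustrative signals, not the paper's data):
N, L, mu, eps = 4000, 5, 0.1, 1e-6        # eps guards the division by X^H X
t = np.arange(N)
s = np.sin(2 * np.pi * 0.02 * t)           # clean signal s(n)
n1 = rng.standard_normal(N)                # reference noise n1(n)
n2 = np.convolve(n1, [0.7, 0.2])[:N]       # correlated noise corrupting s(n)
d = s + n2                                 # desired signal d(n) = s(n) + n2(n)

h_hat = np.zeros(L)                        # initialization: h_hat(0) = 0
e = np.zeros(N)
for n in range(L, N):
    X = n1[n - L + 1:n + 1][::-1]          # X(n) = [x(n), ..., x(n-L+1)]^T
    e[n] = d[n] - h_hat @ X                # e(n) = d(n) - h_hat^H(n) X(n)
    # Normalized update: the step is scaled by the input power X^H(n) X(n),
    # which removes the sensitivity to input scaling noted above.
    h_hat = h_hat + mu * e[n] * X / (eps + X @ X)

# After convergence the residual e(n) tracks the clean signal s(n).
print(np.round(h_hat, 2))
```

Because the effective step size is divided by the instantaneous input power, the same μ works across inputs of very different scale, which is the practical advantage of NLMS over pure LMS.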
I. CONCLUSIONS

REFERENCES