
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 45, NO. 6, JUNE 1997

LMS-Like AR Modeling in the Case of Missing Observations


Sina Mirsaidi, Gilles A. Fleury, and Jacques Oksman

Abstract- This paper presents a new recursive algorithm for the time-domain reconstruction and spectral estimation of uniformly sampled signals with missing observations. An autoregressive (AR) modeling approach is adopted. The AR parameters are estimated by optimizing a mean-square error criterion. The optimum is reached by means of a gradient method adapted to the nonperiodic sampling. The time-domain reconstruction is based on the signal prediction using the estimated model. The power spectral density is obtained using the estimated AR parameters. The development of the different steps of the algorithm is discussed in detail, and several examples are presented to demonstrate the practical results that can be obtained. The spectral estimates are compared with those obtained by known AR estimators applied to the same signals sampled periodically. We note that this algorithm can also be used in the case of nonstationary signals.

I. INTRODUCTION

Irregularly sampled data can occur in two distinct ways: The data can be equally spaced with missed observations, or it can be truly unequally spaced with no underlying sampling interval. Irregularly sampled signals are much harder to analyze than uniformly sampled ones. Shapiro and Silverman [1] introduced the concept of alias-free sampling, and Beutler [2] extended some of their results. Masry [3]-[7] did extensive research on spectrum estimation of unequally spaced data and obtained some theoretical results about the asymptotic behavior of some spectral estimates. The book by Parzen [8] brought together much of the expertise in this field and provides valuable references to earlier and ongoing work. The possibility of a parametric approach was considered by Robinson [9] and Jones [10], [11]. Jones [10], [11] developed a recursive method based on state-space representation and Kalman filtering for both time- and frequency-domain analysis. The concept of on-line time-domain reconstruction of some irregularly sampled processes was studied by Bensaoud and Oksman [12], who developed parametric methods for the real-time reconstruction of these types of sampled signals. Oksman [13], [14] discussed the problem of modeling nonperiodically sampled signals and explained some applications of this kind of sampling. Rozen and Porat [15] developed a method of ARMA parameter estimation for data with missing observations; this method is based on sample covariances and can only be used in stationary environments.

The purpose of this paper is to present a new recursive algorithm for both time- and frequency-domain analysis. The signal is periodically sampled with some missed observations so that the discrete data are located at instants that are multiples of the sampling period. Most of the methods dealing with this kind of unequally spaced data use an amplitude-modulating sequence to describe the missing patterns, as originated by Parzen [16]. The bibliography lists a number of such references [17]-[19]. We adopted a parametric modeling approach and will assume that the pattern of irregular spacing is unrelated to the stochastic properties of the process being sampled. The process is represented by an autoregressive (AR) model, which provides two advantages: First, AR models are flexible enough to represent a large variety of signals with various power spectral density (PSD) functions, and second, in the time-domain reconstruction framework, they facilitate, at each instant, the prediction of future (or missed) samples so that real-time implementations are strongly simplified. The algorithm presented here is an extension of stochastic gradient-type methods to the nonperiodic case, where the cost function is the mean-square prediction error at the instants where samples are available. The values of the missed samples are calculated using a recursively estimated model. The main advantage of the proposed method over existing techniques is that it is not restricted to stationary cases and that it can be used in adaptive contexts where the AR parameters are time varying.

In Section II, some important concepts of the optimal AR predictor are reviewed. The derivation of the algorithm is discussed in detail in Section III. The conditions of convergence of the algorithm, together with some open questions and future perspectives, are discussed in Section IV. Section V presents various examples and simulation results.

Manuscript received July 29, 1995; revised January 20, 1997. The associate editor coordinating the review of this paper and approving it for publication was Prof. Allan Steinhardt. The authors are with the École Supérieure d'Électricité (SUPÉLEC), Gif-sur-Yvette, France. Publisher Item Identifier S 1053-587X(97)04212-8.

II. BRIEF REVIEW OF THE OPTIMAL AR PREDICTOR

We suppose that signal {y_k} is modeled by an AR process, whereas certain signal samples, as well as an estimate of the AR parameters, are known at instant t_n. The goal is to find the optimal k-step-ahead linear predictor based on the signal model and the available data, where k is an arbitrary positive integer. The optimality is in the mean-square sense so that the following quantity has to be minimized:

    J = E{(y_{t_n + kT} - ŷ_{t_n + kT | t_n})²}.    (1)

In (1), T stands for the period of sampling, ŷ_{t_n + kT | t_n} represents an estimate of y_{t_n + kT}, and E is the expectation operator. In an AR modeling context, it is well known [20] that one can recursively obtain the estimate ŷ_{t_n + kT} using the previous

estimates ŷ_{t_n + (k-1)T | t_n}, ..., ŷ_{t_n + (k-M)T | t_n} and the AR model as follows:

    ŷ_{t_n + kT | t_n} = Σ_{i=1}^{M} θ_i ŷ_{t_n + (k-i)T | t_n}    (2)

where ŷ_{t_n + (k-i)T | t_n} stands for the available sample y_{t_n + (k-i)T} itself whenever that sample is known. That means that at each instant t_{n+1}, in order to obtain the value of ŷ_{t_{n+1}}, one has to use recursion (2) for k = 1, 2, ..., t_{n+1} - t_n. This recursive k-step linear predictor is optimal in the mean-square sense. In the following section, we will use this iterative approach of linear predictors for the development of our algorithm.
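As a concrete illustration of recursion (2), the following Python sketch (our own illustration, not the authors' code; the function name is hypothetical, and the AR(2) parameters are those of Example 1 in Section V) fills a gap of lost samples by feeding each one-step prediction back into the regressor:

    import numpy as np

    def k_step_predict(theta, history, gap):
        """Iterate the AR recursion (2) across `gap` missing samples.
        theta = [theta_1, ..., theta_M]; history holds the M most
        recent (possibly predicted) samples, newest first."""
        M = len(theta)
        h = list(history[:M])                 # working regressor, newest first
        preds = []
        for _ in range(gap):
            y_hat = float(np.dot(theta, h))   # one-step prediction
            preds.append(y_hat)
            h = [y_hat] + h[:-1]              # prediction feeds back into h
        return preds

    # two consecutive lost samples of the Example 1 process
    print(k_step_predict(np.array([1.89, -0.9]), [1.0, 0.8], gap=2))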
III. EXTENSION OF THE STOCHASTIC GRADIENT-TYPE METHODS TO THE NONPERIODIC CASE

As mentioned previously, we suppose that the periodically sampled version of the signal y(t) is modeled by an AR process as follows:

    y_n = θ^T Y_n + w_n    (3)

where θ^T = [θ_1, ..., θ_M] is the vector of AR parameters, Y_n^T = [y_{n-1}, ..., y_{n-M}], and {w_n} is a zero-mean white noise process with variance σ_w². For convenience and without loss of generality, we assume unity spacing of samples prior to modification of the sampling train. The sampling is a nonuniform one; it is, in fact, a uniform sampling that has been subjected to random skipping or deletion of some samples. In order to evaluate the AR parameters, we suppose that the model is estimated at an arbitrary instant t_n. At each following instant, if the sample is lost, we estimate it by means of the last estimate of the AR parameters; otherwise, we use the new sample and its estimate to update the model parameters. After a certain time (convergence of the algorithm), we can use the estimated AR parameters to reach the PSD of the signal.

Let {t_1, t_2, ..., t_n} be the set of instants when the samples are not lost, and let the M x 1 vector h_{t_i}^T = [h_{t_i - 1}, ..., h_{t_i - M}] be defined by

    h_{t_i - j} = y_{t_i - j}   if y_{t_i - j} is not lost
                = ŷ_{t_i - j}   otherwise.    (4)

We may now express the estimation error at instant t_i as

    e_{t_i} = y_{t_i} - ŷ_{t_i} = y_{t_i} - θ^T h_{t_i},   for i = 1, ..., n    (5)

with n being the index of the current instant. The cost function is chosen to be as follows:

    J_{t_{n+1}} = (1/(n+1)) Σ_{i=1}^{n+1} |e_{t_i}|²    (6)

where n+1 is the variable length of the observable data. The e_{t_i} and y_{t_i} may be viewed as the elements of (n+1) x 1 vectors e and y such that

    y_{t_{n+1}}^T = [y_{t_1}, ..., y_{t_{n+1}}]    (7)

    e_{t_{n+1}}^T = [e_{t_1}, ..., e_{t_{n+1}}].    (8)

We may then rewrite (5) in matrix form:

    e_{t_{n+1}}^T = y_{t_{n+1}}^T - θ_{t_{n+1}}^T [h_{t_1} | ... | h_{t_{n+1}}]

or

    e_{t_{n+1}} = y_{t_{n+1}} - H_{t_{n+1}}^T θ_{t_{n+1}}    (11)

where

    H_{t_{n+1}} = [h_{t_1} | ... | h_{t_{n+1}}].    (12)

Vector θ must be chosen to minimize J_{t_{n+1}}. We want to obtain an iterative equation of evaluation of θ such as

    θ_{t_{n+1}} = θ_{t_n} - μ g_{t_{n+1}}    (15)

where μ is the step-size parameter, and g_{t_{n+1}} is the gradient vector obtained by differentiating the cost function with respect to the vector of AR parameters θ.

Returning to (6), which defines the cost function, we may rewrite this expression in terms of the estimation error e:

    J_{t_{n+1}} = (1/(n+1)) e_{t_{n+1}}^T e_{t_{n+1}}

which is equivalent to

    J_{t_{n+1}} = (1/(n+1)) (y_{t_{n+1}} - H_{t_{n+1}}^T θ)^T (y_{t_{n+1}} - H_{t_{n+1}}^T θ).    (14)

We define J_{1,t_{n+1}} and J_{2,t_{n+1}} as

    J_{1,t_{n+1}} = θ^T H_{t_{n+1}} y_{t_{n+1}},    J_{2,t_{n+1}} = θ^T H_{t_{n+1}} H_{t_{n+1}}^T θ.

Equation (14) can now be written as

    J_{t_{n+1}} = (1/(n+1)) (y_{t_{n+1}}^T y_{t_{n+1}} - 2 J_{1,t_{n+1}} + J_{2,t_{n+1}}).    (17)

Differentiating J_{t_{n+1}} with respect to θ, we obtain

    ∂J_{t_{n+1}}/∂θ = (1/(n+1)) (-2 ∂J_{1,t_{n+1}}/∂θ + ∂J_{2,t_{n+1}}/∂θ).    (18)

Comparing (15) and (18), we obtain

    g_{t_{n+1}} = (1/(n+1)) (-2 ∂J_{1,t_{n+1}}/∂θ + ∂J_{2,t_{n+1}}/∂θ).    (19)

The next step is to calculate g_{t_{n+1}} in a recursive manner in order to evaluate vector θ as in (15). This is a much more complicated task than in the periodic sampling case; it can be seen from (2) that in k-step prediction with k > 1, the predicted values are not linear functions of θ. Therefore, matrix H contains elements depending on θ that contribute to the computation of the gradient vector. To proceed with the evaluation of vector g_{t_{n+1}}, we will use some differentiating rules that largely simplify the development of the algorithm. These rules, together with the details of the development of the recursive equations, are explained in the Appendix.
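To make the structure of (15)-(19) explicit, here is a minimal Python sketch of one gradient step. It is our own simplified rendering rather than the authors' exact algorithm: it recomputes the gradient from the stored history and treats H as independent of θ, whereas the exact recursions of the Appendix also propagate the derivatives of the predicted samples contained in H:

    import numpy as np

    def gradient_step(theta, H, y, mu):
        """One update theta <- theta - mu * g as in (15), with the
        gradient of (17) evaluated as if H did not depend on theta.
        H is M x (n+1) with columns h_{t_i}; y holds the n+1
        available samples."""
        n1 = y.size                           # n + 1
        dJ1 = H @ y                           # gradient of theta^T H y (H frozen)
        dJ2 = 2.0 * (H @ (H.T @ theta))       # gradient of theta^T H H^T theta
        g = (-2.0 * dJ1 + dJ2) / n1           # gradient vector, cf. (19)
        return theta - mu * g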


A summary of the algorithm is presented below by collecting together the recursions derived in the Appendix. The M x 1 vectors p, q, u, and v, together with the M x M matrices B, K_i, and K_{2,i}, are intermediate variables, and σ̂²_{t_{n+1}} is the estimate of the AR modeling noise power. The definitions of all these variables are given in the Appendix. The notation ( )_i stands for the ith element of a vector; all vectors and matrices with index i correspond to this element. Therefore, each updating equation that contains these variables must be used for i = 1, ..., M, that is, for all corresponding vector elements. In the updating equation for f_{i,t_{n+1}}, each element of the vector ∂h_{t_{n+1}}^T/∂θ_i is updated using the corresponding recursive equation of the Appendix.

Summary of the Proposed Algorithm

The algorithm can be started with zero initial conditions for all the variables. This is especially recommended for vector θ in the case of large model orders; the choice of initial conditions with large absolute values increases the risk of instability in the first steps.

Fig. 1. Probability distribution function of the instantaneous sampling period for p = 0.6.

TABLE I
TIME-DOMAIN RECONSTRUCTION PERFORMANCE. S/N = 10 log(signal power/estimation error power); p: expected value of the ratio (number of observed samples)/(total number of samples).

    (a) Example 1         (b) Example 2
    p      S/N (dB)       p      S/N (dB)
    0.6    17.45          0.6    14.47
    0.7    19.56          0.7    15.14
    0.8    20.83          0.8    15.76
    ...    22.29          ...    16.8

IV. CONVERGENCE CONDITIONS

The basic idea of the algorithm proposed in the previous section is very similar to that of the LMS algorithm. As is known, the stability condition of the LMS algorithm is

    0 < μ < 1 / (M x total input power)    (20)

where M is the AR model order. The LMS algorithm replaces, at each instant, the true gradient (a statistical expectation) with an instantaneous estimate of the gradient, which is a random vector. That is why this algorithm is noisy and slowly descending. The choice of a more rigorous cost function [21], i.e., the mean-square error, decreases the variance of the estimated parameters. As we use the global information for updating the parameters at each step, the gradient vector is much more reliable than the one used in LMS, and the direction of descent is much better pointed to the optimal target. These advantages give the possibility of choosing larger descent steps and, consequently, a faster convergence, especially in the periodic sampling case. In the periodic sampling case, as is reported in [21], this algorithm exhibits an RLS-like fast adaptation. In the nonperiodic sampling case, especially when the number of lost samples is high, use of a step-size parameter μ with a greater value than in (20) is not recommended. Simulation results have shown that the choice of μ as in (20) is largely sufficient to guarantee the convergence of this algorithm, and

MIRSAIDI ef al.: LMS-LIKE AR MODELING IN THE CASE OF MISSING OBSERVATIONS

1517

the results obtained are in good agreement with the case of periodic sampling. Obtaining the precise theoretical value of μ that guarantees the stability of the algorithm (even in the periodic sampling case) is a formidable task. The estimates of the lost samples are nonlinear functions of the AR parameters, and the degree of the nonlinearity is more or less random because of the random length of the sampling intervals. This fact increases the complexity of the algorithm with respect to the periodic case. Obtaining a faster algorithm based on this idea may be a fruitful direction for further research. Several numerical examples have been considered with the choice of μ as in (20). The results obtained are satisfactory and in good agreement with those of the periodic sampling case.
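Putting the pieces together, a minimal end-to-end loop might look as follows. This is again our own sketch under the same frozen-H simplification (the paper's Appendix derives recursive updates instead of storing the full history), with the step size kept inside the bound (20); k_step_predict and gradient_step are the helpers sketched earlier:

    import numpy as np

    def run_algorithm(samples, observed, M):
        """Process a stream where observed[k] is False for lost samples.
        Lost samples are replaced by model predictions (Section III);
        theta is updated at every observed instant.  The first M
        samples are assumed to be observed."""
        theta = np.zeros(M)                   # zero initial conditions
        buf = list(samples[:M][::-1])         # regressor buffer, newest first
        h_cols, y_obs = [], []
        for k in range(M, len(samples)):
            h = np.array(buf[:M])
            if observed[k]:
                y_obs.append(samples[k])
                h_cols.append(h)
                H = np.stack(h_cols, axis=1)  # M x (number of observations)
                power = np.mean(np.square(y_obs)) + 1e-12
                mu = 0.5 / (M * power)        # step size within bound (20)
                theta = gradient_step(theta, H, np.array(y_obs), mu)
                buf = [samples[k]] + buf[:-1]
            else:
                y_hat = k_step_predict(theta, buf, gap=1)[0]
                buf = [y_hat] + buf[:-1]      # predicted value replaces the loss
        return theta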

V. SIMULATIONS

In this section, we study the performance of the proposed algorithm in both time-domain and frequency-domain reconstruction. We use a random sampling scheme: It is supposed that each sample has the probability q = 1 - p of being lost and that these skips occur independently. If we assume a sampling frequency of f_s, the average rate of sampling is then reduced to p f_s, which is below the Nyquist rate. Fig. 1 shows an example of the distribution of the lengths of the intervals between two consecutive available samples for p = 0.6 and f_s = 1. We see that about 65% of the time, the instantaneous sampling period does not respect the Shannon condition. This sampling scheme has been chosen because it is the most realistic one in practical situations, and it should be noted that all kinds of periodic sampling with missing observations could be handled by this algorithm.
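This sampling scheme is straightforward to reproduce. The short Python sketch below (our illustration; the variable names are ours) draws a Bernoulli observation mask and tabulates the instantaneous sampling periods, whose geometric distribution is the one shown in Fig. 1; the last line estimates the fraction of time spent above the unit sampling period, about 0.64 for p = 0.6, in agreement with the figure quoted above:

    import numpy as np

    rng = np.random.default_rng(0)
    p = 0.6                                # probability that a sample is kept
    observed = rng.random(20000) < p       # independent Bernoulli mask
    t = np.flatnonzero(observed)           # instants of available samples
    gaps = np.diff(t)                      # instantaneous sampling periods
    for k in range(1, 6):
        print(k, np.mean(gaps == k))       # empirical P(gap = k) = p(1-p)^(k-1)
    print(np.sum(gaps[gaps >= 2]) / np.sum(gaps))   # undersampled fraction of time, ~1 - p^2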
A. Time-Domain Reconstruction

In order to test the performance of the algorithm in time-domain reconstruction, we have considered two different types of signals. The sampling period in each case is T = 1. For each test signal, the ensemble-averaged squared error and the average estimated amplitude are obtained over 100 realizations of independent sampling processes.

Example 1: We consider a lowpass AR(2) process

    Σ_{i=1}^{3} a_i y_{n-i+1} = w_n

with the following parameters:

    a_1 = 1,  a_2 = -1.89,  a_3 = 0.9

that is, y_n = 1.89 y_{n-1} - 0.9 y_{n-2} + w_n

Fig. 2. Time-domain reconstruction performance of the algorithm (Example 1). (a) Ensemble-averaged squared error for p = 1, 0.8, 0.7, and 0.6. (b) Average estimated amplitude. (c) Estimated amplitude for one realization of the sampling process: original signal, estimated signal, and observed samples.

where {w_n} is a zero-mean white Gaussian noise with variance σ_w² = 1. In order to evaluate the quality of signal reconstruction, we define the following signal-to-noise ratio (SNR):

    SNR = 10 log (signal power / estimation error power).    (21)

The step-size parameter value is obtained by

    0 < μ < 1 / (M x total input power)    (22)

with M = 2.
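For reference, the test process of Example 1 and the criterion (21) can be reproduced in a few lines (our sketch; the reconstruction y_hat itself would come from the adaptation loop sketched in Section IV):

    import numpy as np

    rng = np.random.default_rng(1)
    N = 2000
    w = rng.standard_normal(N)             # white Gaussian noise, variance 1
    y = np.zeros(N)
    for n in range(2, N):                  # y_n = 1.89 y_{n-1} - 0.9 y_{n-2} + w_n
        y[n] = 1.89 * y[n - 1] - 0.9 * y[n - 2] + w[n]

    def snr_db(y, y_hat):
        """SNR of (21): 10 log10(signal power / estimation error power)."""
        return 10.0 * np.log10(np.mean(y ** 2) / np.mean((y - y_hat) ** 2))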



Fig. 3. Passband Butterworth filter, f_0 = 0.3, Δf = 0.005. (a) Signal. (b) Spectrum. (c) Ensemble-averaged squared error for p = 1, 0.8, 0.7, and 0.6. (d) Average estimated amplitude: original signal and estimated signal.

Fig. 2(a) shows the behavior of the ensemble-averaged squared error as a function of time and for different values of p. We note that the speed of convergence is the same as in the periodic case. However, the steady-state value of the averaged error is greater in the nonperiodic case. Table I(a) summarizes the time-domain reconstruction performance of the algorithm. The ensemble-averaged estimated amplitude is shown in Fig. 2(b). Fig. 2(c) shows the reconstructed signal for a single realization of the sampling process. As we can see in Table I(a), in this example, a 10% loss in the samples is approximately equivalent to a 1-dB degradation in the SNR.

Example 2: The second test signal is a passband one. Its spectrum is given by the transfer function of a first-order Butterworth filter with a sharp peak. Fig. 3(a) and (b) shows the signal and its spectrum. The results of the simulation are illustrated in Fig. 3(c) and (d) and Table I(b).

B. Spectral Estimation

In this section, we study the performance of the spectral estimation in single- and multipeak cases. The first two examples are sharp-peak spectra. In the third example, a spectrum with a larger bandwidth is considered. In the last example, a more complicated spectrum representing a vowel spoken by a male speaker is studied. In each case, and as a reference for comparison, the spectral estimate obtained by an AR estimator in the periodic case is also given. The approach used in the periodic case is the modified covariance method, where the sum of a least-squares criterion for a forward model and the analogous criterion for a time-reversed model is minimized [22]. For each test spectrum, 100 realizations of the spectral estimator are obtained, each corresponding to an independent sampling process. The average of the 100 estimates, together with the PSD obtained by the forward-backward approach, are shown for different values of p and M. In each case, the total number of observed samples is N = 2000.

Example 3: The test spectrum, as mentioned previously, is given by the transfer function of a narrowband first-order Butterworth filter with a sharp peak at f_0 = 0.3 and a bandwidth of Δf = 0.005.
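Once the parameter vector has converged, the spectral estimate follows directly from the AR parameters. A minimal sketch (ours; theta and the modeling-noise power sigma2 are assumed to be the outputs of the estimation algorithm):

    import numpy as np

    def ar_psd(theta, sigma2, n_freq=512):
        """AR power spectral density
        S(f) = sigma2 / |1 - sum_i theta_i exp(-j 2 pi f i)|^2
        on a grid of normalized frequencies in [0, 0.5]."""
        f = np.linspace(0.0, 0.5, n_freq)
        M = len(theta)
        E = np.exp(-2j * np.pi * np.outer(f, np.arange(1, M + 1)))
        denom = 1.0 - E @ theta               # AR polynomial on the unit circle
        return f, sigma2 / np.abs(denom) ** 2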



Fig. 4. Spectral estimation of single-peak spectra, Butterworth filter with f_0 = 0.3 and Δf = 0.005; N = 2000.

Fig. 4 shows plots of the estimated spectra for M = 2. We see that in each case, the frequency of the sharp peak is well estimated. In the passband of the filter, there is no significant bias between the estimates in the periodic and nonperiodic cases. We note that if the only goal is the detection of the sharp peaks, M = 2 and 50% of the samples are largely sufficient; otherwise, one has to choose greater values of M in order to have much more precision in the spectral estimation.

Example 4: The average performance of the estimation of spectra that have more than one peak is investigated here. This example is based on the assumption that the peaks represent two zero-mean signals from independent processes. The spectra are transfer functions of first-order Butterworth filters with the following parameters:
    f_{0,1} = 0.3,  Δf_1 = 0.005
    f_{0,2} = 0.35, Δf_2 = 0.005.

There is approximately a 15-dB difference between the amplitudes of the sharp peaks. Fig. 5(a)-(c) shows plots of the estimated spectra (periodic and nonperiodic cases) for different values of M and p.

Example 5: The performance of the AR spectral estimator for spectra with larger bandwidth is investigated in this example. The test spectrum is the same as in Example 3 but with a larger bandwidth of Δf = 0.05. Plots of the spectral estimates are shown in Fig. 6(a) and (b).

Example 6: In this example, we study the spectrum of the vowel "i" (in French) spoken by a male speaker. This signal is sampled at 9600 Hz. Fig. 7(a) and (b) shows the results of the simulation. We note that we have more detailed spectral estimates for greater values of M.

Example 7: In this example, the average performance of spectral estimation of the proposed method is compared with that of Jones's algorithm [11]. The test signal is an AR(2) process with parameters θ^T = [1, -0.3, 0.5] and q = 0.4. Fig. 8 shows the results obtained. As an index of performance, we consider the frequency deviation between the peak frequency of the original PSD (f_p) and that of each estimated PSD (f̂_p). This value is then normalized by the peak frequency of the


Fig. 5. Spectral estimation of multipeak spectra, Butterworth filters with f_{0,1} = 0.3, f_{0,2} = 0.35, and Δf = 0.005; N = 2000. Original PSD, estimated PSD (periodic sampling), and estimated PSD (nonperiodic sampling). (a) M = 10, p = 0.6. (b) M = 20, p = 0.6. (c) M = 20, p = 1.



Fig. 6. Spectral estimation of larger bandwidth spectra, Butterworth filter with f_0 = 0.3 and Δf = 0.05. (a) M = 5. (b) M = 20.

Fig. 7. Spectral estimation of the vowel "i" spoken by a French male speaker; N = 2000. Periodic sampling and nonperiodic sampling. (a) M = 20, p = 0.6. (b) M = 40, p = 0.6.

Fig. 8. Spectral estimation: original PSD, estimated spectra (Jones's method), and estimated spectra (proposed method).

original signal. The normalized deviation |f_p - f̂_p| / f_p obtained with Jones's method is 0.036, and the value obtained with the proposed method is comparable. We note a very similar behavior of both methods for this test signal. Moreover, as mentioned before, Jones's method can only be used in block-mode operations, whereas this method can be classified among real-time algorithms.

Discussion: From the previous and numerous other examples, the following points are noted:

* The performance of the AR estimators in both the periodic and nonperiodic cases is similar, especially in the more informative zones of the spectrum.
* In the nonperiodic case, a residual noise level is observed in the spectral estimates. It becomes more pronounced as the number of lost samples increases and is situated at -40 dB for the signals in Examples 4 and 6. The reason is obviously the lack of information from the signal. In order to have the same fidelity in the spectral estimates as in the periodic case, the only solution is to choose greater model orders.
* For both the periodic and nonperiodic cases, a higher order AR estimator is needed to resolve neighboring spectral peaks with the same fidelity as in the single-peak case. For peaks with greater amplitude ratios, higher model orders should be used.
* In the case of spectra with larger bandwidths, one must choose larger model orders in order to obtain spectral estimates with the same fidelity as in the case of sharp-peak spectra. This is because AR models are less adapted to these kinds of spectra than to those with sharp peaks.

VI. CONCLUSION

A new recursive method for time- and frequency-domain reconstruction of nonperiodically sampled signals is described.


The performance of the method is studied through comparison with conventional methods in the periodic case. Several examples of different types of signals are considered. The simulation results show that the method behaves well even in cases where 40 to 50% of the samples are lost. The main advantage of the proposed method over existing techniques is that it can be used in adaptive contexts where tracking of parameter variations is necessary. Using the mean-square error as a cost function results in faster convergence with less noisy parameter estimates than those of the well-known LMS algorithm. Detailed demonstrations of some theoretical aspects of the algorithm cannot yet be found, even in the periodic sampling case. Further studies in the area of fast computations seem worthwhile. Nevertheless, we think that this method can be used as a basic tool for the development of different types of adaptive algorithms for the parametric identification of signals with missing observations. We note also the possibility of applying the proposed algorithm to nonstationary environments by introducing a forgetting factor in the cost function. This aspect will be studied in detail in a subsequent paper. As a final remark, we point out that this method can be used in any signal processing, control, or other application where data compression is needed.

APPENDIX

Differentiating Rules

Here are some differentiating rules used for the development of the algorithm.

Rule 1: Let J be a scalar-valued function of an M x 1 vector θ with the following expression:

    J = a^T F θ

where a is an M x 1 constant vector, and F is an M x M matrix with elements depending on θ. The derivative of J with respect to θ can be formulated as

    (∂J/∂θ)_i = (F^T a)_i + a^T (∂F/∂θ_i) θ,   i = 1, ..., M.

Rule 2: Let J be a scalar-valued function of an M x 1 vector θ:

    J = θ^T F θ

where F is an M x M matrix with elements depending on θ. The derivative of J with respect to θ can be formulated as

    (∂J/∂θ)_i = ((F + F^T) θ)_i + θ^T (∂F/∂θ_i) θ,   i = 1, ..., M.

Rule 3: Let h_{t_i} denote the same functional quantity as in Section III. The derivative of h_{t_i} with respect to vector θ follows from (4): the elements equal to available samples have zero derivative, whereas the elements equal to predicted values ŷ are differentiated through the recursion (2).

Recursive Computation of ∂J_{1,t_{n+1}}/∂θ

Using Rule 1 with a = θ and F = H_{t_{n+1}}, and replacing θ by its last estimate θ_{t_n}, we obtain the gradient (A.1) in terms of a vector p_{t_{n+1}} defined in (A.2) and of

    q_{t_{n+1}} = (1/(n+1)) H_{t_{n+1}} y_{t_{n+1}}.    (A.3)

In order to find a recursive equation for p_{t_{n+1}}, we suppose that (p_{t_{n+1}})_i is the ith element of this vector and replace θ by its last estimated value, namely θ_{t_n}, in (A.2).


We now define the intermediary vector f_{i,t_n}. Using a partitioned form of the matrix H_{t_{n+1}}, we get the recursion (A.5); using the partitioned form of y_{t_{n+1}} and the matrix H_{t_{n+1}}, this expression may be rewritten as (A.6). To obtain the terms ∂h_{t_{n+1}}^T/∂θ_i, i = 1, ..., M, in (A.6) in an iterative manner, one can apply Rule 3 to each element of the vector h_{t_{n+1}}. For the term q_{t_{n+1}}, the following recursion is easily obtained from (A.3) and the partitioned form of H_{t_{n+1}}:

    q_{t_{n+1}} = (n/(n+1)) q_{t_n} + (1/(n+1)) h_{t_{n+1}} y_{t_{n+1}}.    (A.8)

This completes the first part of the algorithm used to update the term ∂J_{1,t_{n+1}}/∂θ.
Recursive Computation of ∂J_{2,t_{n+1}}/∂θ

We recall the expression of J_{2,t_{n+1}}:

    J_{2,t_{n+1}} = θ^T H_{t_{n+1}} H_{t_{n+1}}^T θ.

Using Rule 2 with F = H_{t_{n+1}} H_{t_{n+1}}^T, replacing θ by its last estimate θ_{t_n}, and considering that F^T = F, we obtain (A.14); as in the previous section, we try to find a recursion for the ith element of this vector. We define a vector u_{t_{n+1}} and an intermediary matrix B_{t_{n+1}} as

    u_{t_{n+1}} = (1/(n+1)) H_{t_{n+1}} H_{t_{n+1}}^T θ_{t_n}    (A.10)

    B_{t_{n+1}} = (1/(n+1)) H_{t_{n+1}} H_{t_{n+1}}^T    (A.11)

together with the vector v_{t_{n+1}} of (A.12) and (A.13), and we obtain the corresponding recursions. We then suppose (A.15) and, differentiating K_{2,t_{n+1}} with respect to θ_i, we obtain (A.16), which may be rewritten as

    K_{i,t_{n+1}} = K_{2,i,t_{n+1}} + (K_{2,i,t_{n+1}})^T.    (A.17)

Finally, using the partitioned form of the matrix H_{t_{n+1}} in (A.17), we obtain the last recursion of this part. This completes the second part of the algorithm used to update ∂J_{2,t_{n+1}}/∂θ.

The variance of the modeling noise can be computed as the power of the estimation error at the instants t_1, ..., t_n. This would be the best estimation, as this


error is unknown at the instants corresponding to the skipped samples. The recursion formula is

    σ̂²_{t_{n+1}} = (n/(n+1)) σ̂²_{t_n} + (1/(n+1)) |e_{t_{n+1}}|².    (A.19)
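The running-average structure of (A.19) is easy to mirror in code; a one-function Python sketch (ours):

    def update_noise_power(sigma2, e_new, n):
        """(A.19): average of |e|^2 over n+1 errors, updated from the
        average over the previous n errors and the newest error e_new."""
        return (n * sigma2 + e_new ** 2) / (n + 1)

    s = 0.0
    for n, e in enumerate([0.5, -0.2, 0.1]):
        s = update_noise_power(s, e, n)
    print(s)   # 0.1, the mean of [0.25, 0.04, 0.01]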

REFERENCES

[1] H. S. Shapiro and R. A. Silverman, "Alias-free sampling of random noise," J. Soc. Indust. Appl. Math., vol. 8, pp. 225-248, 1960.
[2] F. J. Beutler, "Alias-free randomly timed sampling of stochastic processes," IEEE Trans. Inform. Theory, vol. IT-16, pp. 147-152, Mar. 1970.
[3] E. Masry, "Random sampling and reconstruction of spectra," Inform. Contr., vol. 19, pp. 275-288, 1971.
[4] E. Masry, "Poisson sampling and spectral estimation of continuous-time processes," IEEE Trans. Inform. Theory, vol. IT-24, pp. 173-183, 1978.
[5] E. Masry, D. Klamer, and C. Mirabile, "Spectral estimation of continuous-time processes: Performance comparison between periodic and Poisson sampling schemes," IEEE Trans. Automat. Contr., vol. AC-23, pp. 679-685, 1978.
[6] E. Masry and M. C. Lui, "A consistent estimate of the spectrum by random sampling of the time series," SIAM J. Appl. Math., vol. 28, pp. 793-810, 1975.
[7] E. Masry and M. C. Lui, "Discrete-time spectral estimation of continuous-parameter processes: A new consistent estimate," IEEE Trans. Inform. Theory, vol. IT-22, pp. 298-312, 1976.
[8] E. A. Parzen, Time Series Analysis of Irregularly Observed Data, Lect. Notes Stat., vol. 25, 1984.
[9] P. M. Robinson, "Estimation of a time series model from unequally spaced data," Stoch. Processes Appl., vol. 6, pp. 225-248, 1977.
[10] R. H. Jones, "Fitting a continuous time autoregression to discrete data," in Applied Time Series Analysis II, pp. 651-682, 1981.
[11] R. H. Jones, "Maximum likelihood fitting of ARMA models to time series with missing observations," Technometrics, vol. 22, no. 3, pp. 389-395, 1980.
[12] O. Bensaoud and J. Oksman, "Reconstruction en temps réel des signaux à échantillonnage non périodique," Traitement du Signal, vol. 11, no. 3, pp. 283-293, 1994.
[13] J. Oksman, "Modèle général des signaux à échantillonnage non périodique," Tech. Rep., École Supérieure d'Électricité, Gif-sur-Yvette, France, 1994.
[14] J. Oksman, "ESPRIT project Slopsys: Adaptive sampling for power measurement systems," Tech. Rep., École Supérieure d'Électricité, Gif-sur-Yvette, France, 1994.
[15] Y. Rozen and B. Porat, "Optimal ARMA parameter estimation based on the sample covariances for data with missing observations," IEEE Trans. Inform. Theory, vol. 35, pp. 342-349, 1989.
[16] E. A. Parzen, "On spectral analysis with missing observations and amplitude modulation," Sankhya, ser. A, vol. 25, pp. 383-392, 1963.
[17] H. Sakai, "Fitting autoregression with regularly missed observations," Ann. Inst. Statist. Math., vol. 32, pt. A, pp. 393-400, 1980.
[18] W. Dunsmuir and P. M. Robinson, "Asymptotic theory for time series containing missing and amplitude modulated observations," Sankhya, ser. A, vol. 43, pp. 260-281, 1981.
[19] W. Dunsmuir and P. M. Robinson, "Estimation of time series models in the presence of missing data," J. Amer. Stat. Assoc., vol. 76, pp. 560-568, 1981.
[20] G. C. Goodwin and K. S. Sin, Adaptive Filtering, Prediction, and Control. Englewood Cliffs, NJ: Prentice-Hall, 1984.
[21] K. M. Tao, "Statistical averaging and PARTAN: Some alternatives to LMS and RLS," in Proc. ICASSP, 1986, vol. 4, pp. 25-28.
[22] S. L. Marple, Digital Spectral Analysis with Applications. Englewood Cliffs, NJ: Prentice-Hall, 1987.

Sina Mirsaidi was born in Tehran, Iran, in 1966. He received the B.S. degree from the University of Tehran in 1989 and the M.S. degree from the École Supérieure d'Électricité (SUPÉLEC), Gif-sur-Yvette, France. He is currently a research assistant with the Measurement Department at SUPÉLEC, where he is pursuing the Ph.D. degree. His main research interests are in the area of adaptive signal processing, time series analysis, and data compression. His current interests focus on adaptive processing of nonuniformly sampled signals with applications to communication problems.

Gilles A. Fleury was born in Bordeaux, France, on January 8, 1968. He received the B.S. degree from the École Supérieure d'Électricité (SUPÉLEC), Gif-sur-Yvette, France, in 1990 and the Ph.D. degree in signal processing from the Université de Paris-Sud, Orsay, France, in 1994. He is presently a professor with the Measurement Department of SUPÉLEC. After some work on inverse problems and optimal design, his interests moved toward optimal nonlinear modeling and nonuniform sampling signal processing.

Jacques Oksman was born in Toulouse, France, in 1948. He received the engineering degree from the École Supérieure d'Électricité (SUPÉLEC), Gif-sur-Yvette, France, in 1971. Currently, he is a Professor at SUPÉLEC, where he is in charge of the Signal Processing Group in the Measurement Department. His main interests are signal processing for measurement purposes and DSP architecture. He has been working on various research projects involving such topics as parametric modeling of signals or systems for solving inverse problems and nonuniform sampling of signals. He teaches courses on digital design, numerical analysis, and optimization.
