
BIME Journal, Volume (06), Issue (1), Dec., 2006

Employing Time-Domain Methods and Poincaré Plot of Heart Rate Variability Signals to Detect Congestive Heart Failure
Alia S. Khaled, Mohamed I. Owis, Abdalla S. A. Mohamed Department of Systems and Biomedical Engineering, Faculty of Engineering, Cairo University, El-Gamaa st., Cairo, Egypt

Abstract
Congestive heart failure (CHF) is a common and serious medical condition in which the heart is not able to pump enough blood to meet the body's energy demands. Heart failure typically develops slowly after injury to the heart, such as a heart attack, too much strain on the heart due to years of untreated high blood pressure, or diseased cardiac valves. In this study, we consider the problem of detection of CHF using heart rate variability (HRV) analysis techniques, which depend on the variations among consecutive heartbeats. The proposed solution consists of feature extraction followed by a classification step. Features are extracted using HRV analysis methods, namely time-domain methods and the Poincaré plot. For classification, three statistical classifiers and back-propagation neural networks (BPNN) are used. Results show that these classifiers, with normalized features, are capable of detecting CHF with a sensitivity of 97.90% and a positive predictive accuracy of 98.19%. Time-domain features discriminate the normal from the CHF signals better than the Poincaré plot features.

Keywords: HRV, CHF, Time-domain Analysis, Poincaré Plot, Back-propagation Neural Networks.

1. Introduction
More than 20 million people around the world (nearly 5 million in the United States) have heart failure. Heart failure is the leading cause of hospitalization for people aged 65 and older. About 550,000 people in the United States alone develop CHF each year, with an annual mortality of 266,000, and the number of people living with heart failure is growing [1, 2, 3]. Early diagnosis of heart failure is required, especially because it is not always apparent; if not diagnosed early, the patient is more liable to sudden cardiac death. Electrocardiogram (ECG) records alone cannot diagnose heart failure; they are only an indicator of heart problems, so physicians need other tests to diagnose heart failure.

Heart rate variability (HRV), representing heart rate and beat-to-beat variations, is another signal that can be extracted from the ECG. The basic variable in HRV is the time interval between consecutive ECG R-waves, known as the RR interval. HRV can be used for automatic detection of heart failure through analysis techniques that transform the mostly qualitative diagnostic information in the ECG into more objective quantitative features that can subsequently be classified. HRV has been the subject of numerous clinical studies investigating a wide spectrum of cardiac and noncardiac diseases and clinical conditions. Examples include myocardial infarction (MI) [4, 5, 6, 7], sudden cardiac death and ventricular arrhythmias [8, 9, 10, 11], hypertension [12, 13], diabetes mellitus [14, 15, 16], and heart transplantation [17, 18, 19].

Many studies have analyzed HRV during CHF. In 1989, Casolo et al. [20] used time-domain analysis of the histogram of the RR intervals from 24-hour Holter recordings to compare a CHF patient group with a control group. In 1994, Woo et al. [21] used the Poincaré plot method, a nonlinear method, to assist the analysis of sympathetic influences. In 1999, Bonaduce et al. [22] designed a study to evaluate the predictive value of HRV and Poincaré plots, as assessed from 24-hour Holter recordings, in patients with chronic heart failure.

In this work we propose the use of HRV features to detect the presence of CHF. Two of the HRV analysis techniques are considered: time-domain methods, which can be statistical or geometrical, and the nonlinear Poincaré plot method. The proposed CHF detection technique is composed of two stages: feature extraction and classification. In the second section of this paper we discuss the HRV features selected for analysis. In the third section we describe the different classification techniques used in this study. Results obtained using different feature vectors and different classifiers are presented in section 4. We finally discuss the capabilities of the proposed technique as well as possible modifications.


2. Feature Extraction
In this study we consider two methods for feature extraction: time-domain methods and the nonlinear Poincaré plot method.

Time-Domain Methods: The time-domain parameters are obtained using either statistical or geometrical methods. Using statistical methods, the mean and the standard deviation of the RR interval are calculated. Only RR intervals of sinus beats, Normal-to-Normal or NN intervals, are taken into account [23]. The standard deviation of the NN interval is denoted by SDNN; it reflects all the cyclic components responsible for variability over the period of recording. Other measures can be derived from interval differences, such as RMSSD, the root mean square of differences of successive NN intervals; NN50, the number of differences of successive NN intervals greater than 50 ms; and pNN50, the proportion derived by dividing NN50 by the total number of NN intervals. All these measurements of short-term variation capture high-frequency components of heart rate and are thus highly correlated [24]. Instantaneous heart rate (IHR) is another descriptive signal that reflects the instantaneous change in heart rate; it equals the reciprocal of the NN interval. In the present work, short-term HRV signals (2-5 min) are used.

The series of NN intervals can also be converted into a geometric pattern, such as the sample density distribution of NN interval durations. Most geometric methods require the RR (or NN) interval sequence to be measured on, or converted to, a discrete scale which is neither too fine nor too coarse and which permits the construction of smoothed histograms. Most experience has been obtained with bins approximately 8 ms long (precisely 7.8125 ms = 1/128 s), which corresponds to the precision of current commercial equipment. Two measures are selected. The first is the HRV triangular index: the integral of the density distribution (i.e., the number of all NN intervals) divided by the maximum of the density distribution. On a discrete scale, this measure is approximated by (total number of NN intervals) / (number of NN intervals in the modal bin), and therefore depends on the length of the bin, i.e., on the precision of the discrete scale of measurement. The second is the triangular interpolation of the NN interval histogram (TINN): the baseline width of the minimum-square-difference triangular interpolation of the highest peak of the histogram of all NN intervals [24].

Poincaré Plot: Nonlinear phenomena are certainly involved in the genesis of HRV. They are determined by complex interactions of hemodynamic, electrophysiological and humoral variables, as well as by autonomic and central nervous regulation. It has been speculated that analysis of HRV based on the methods of nonlinear dynamics might elicit valuable information for the physiological interpretation of HRV and for the assessment of the risk of sudden death. One nonlinear method, used here, is the so-called Poincaré plot: a graphical representation of the correlation between consecutive RR intervals, in which each RR interval is plotted against the next interval. Poincaré plot analysis is an emerging quantitative-visual technique whereby the shape of the plot is categorized into functional classes that indicate the degree of heart failure in a subject. The plot provides summary information as well as detailed beat-to-beat information on the behavior of the heart [25]. Poincaré plots are most often taken over 5-10 minute intervals, as in this work, or over a 24-h segment; for the relatively short 5-10 minute segments, wide-sense stationarity may be achieved. The RR interval Poincaré plot typically appears as an elongated cloud of points oriented along the line-of-identity at 45° to the normal axis, as exemplified in Fig. 1. The dispersion of points perpendicular to the line-of-identity reflects the level of short-term variability, while the dispersion of points along the line-of-identity is thought to indicate the level of long-term variability [26]. To characterize the shape of the plot mathematically, the plot is usually fitted to an ellipse, as shown in Fig. 1. A set of axes oriented with the line-of-identity is defined; the axes of the Poincaré plot are related to the new set of axes by a rotation of θ = π/4 rad:
    x1 =  RR_n cos θ + RR_n+1 sin θ
    x2 = -RR_n sin θ + RR_n+1 cos θ                      (1)

In the reference system of the new axes, the dispersion of the points around the x2 axis is measured by the standard deviation denoted SD1. This quantity measures the width of the Poincaré cloud and therefore indicates the level of short-term HRV. The length of the cloud along the line-of-identity measures the long-term HRV and is given by SD2, the standard deviation around the x1 axis [25].
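For illustration, the time-domain measures and the Poincaré SD1/SD2 measures described above can be sketched in Python. This is a minimal sketch, not the authors' code: function and variable names are ours, and NN intervals are assumed to be given in milliseconds.

```python
import numpy as np

def time_domain_features(nn_ms):
    """Time-domain HRV measures from an NN-interval series (milliseconds)."""
    nn = np.asarray(nn_ms, dtype=float)
    diff = np.diff(nn)                        # successive NN-interval differences
    feats = {
        "mean_rr": nn.mean(),
        "sdnn": nn.std(ddof=1),               # overall variability
        "mean_ihr": (60000.0 / nn).mean(),    # instantaneous heart rate (bpm)
        "rmssd": np.sqrt(np.mean(diff ** 2)),
        "nn50": int(np.sum(np.abs(diff) > 50.0)),
        "pnn50": float(np.sum(np.abs(diff) > 50.0)) / len(diff),
    }
    # Geometric measure: HRV triangular index with ~7.8125 ms bins (1/128 s)
    bins = np.arange(nn.min(), nn.max() + 7.8125, 7.8125)
    hist, _ = np.histogram(nn, bins=bins)
    feats["hrv_tri_index"] = len(nn) / hist.max()
    return feats

def poincare_features(nn_ms):
    """SD1/SD2 from the Poincaré plot of (RR_n, RR_n+1), via the pi/4 rotation of Eq. (1)."""
    nn = np.asarray(nn_ms, dtype=float)
    x, y = nn[:-1], nn[1:]
    x1 = (x + y) / np.sqrt(2.0)   # along the line-of-identity
    x2 = (y - x) / np.sqrt(2.0)   # perpendicular to the line-of-identity
    return {"sd1": x2.std(ddof=1), "sd2": x1.std(ddof=1)}
```

Note that SD1 reduces to the standard deviation of the successive differences divided by sqrt(2), consistent with the relation to SDSD cited in the discussion [25].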

Fig. 1. Poincaré Plot.

Normalization: A common problem encountered when a large number of features is used is that each feature generally has a different range of values according to its type. Normalizing all parameter values in the feature vector to a fixed range around zero (e.g., between ±1) is studied as a convenient preprocessing step for proper weighting of the parameters used in classification [27, 28]. The normalized parameter NP is calculated from its original value P using

    NP = 2 * (P - min(P)) / (max(P) - min(P)) - 1        (2)
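Eq. (2) is a min-max mapping of each feature onto [-1, 1]; a minimal sketch (the function name is ours):

```python
import numpy as np

def normalize_feature(p):
    """Eq. (2): map parameter values P onto [-1, 1] via min-max scaling."""
    p = np.asarray(p, dtype=float)
    return 2.0 * (p - p.min()) / (p.max() - p.min()) - 1.0
```

In practice the min and max would be taken from the learning set and reused on the test set, so both subsets share the same scaling.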

Significance Test: A significance test is used in the present work to assess the usefulness of the parameters extracted from HRV analysis for detection of CHF. This is a two-sample problem: random samples selected separately from the two populations (for the same measured variable) are compared through their population means by testing the hypothesis of no difference, H0: μ1 = μ2. The t-test is the most commonly used method to evaluate the difference in means between two groups. In this work, the t-test is applied to assess the significance of each feature for detection of CHF, with the significance level set to 0.05.
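The per-feature screening can be sketched with SciPy's two-sample t-test. This is our illustrative helper, not the authors' code, and it assumes each group is given as an array of shape (n_signals, n_features):

```python
import numpy as np
from scipy.stats import ttest_ind

def significant_features(normal, chf, alpha=0.05):
    """Two-sample t-test per feature column; return indices of columns
    whose p-value is below the significance level alpha (here 0.05)."""
    _, p = ttest_ind(np.asarray(normal, float), np.asarray(chf, float), axis=0)
    return [i for i, pv in enumerate(p) if pv < alpha]
```

Only the features surviving this screen would be expected to contribute to classification, which is the comparison reported in Table 1.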

3. Classification
In order to investigate the performance of the proposed features in detecting CHF, a number of the most commonly used classifiers are implemented: three statistical classifiers and a back-propagation neural network (BPNN). The performance of all classifiers is reported in terms of sensitivity, specificity, positive predictive accuracy, and error rate, the standard statistics used to measure the performance of classification algorithms [29]. Sensitivity measures how well the algorithm identifies CHF signals, while specificity measures how well it identifies normal signals. Positive predictive accuracy measures how often the algorithm is correct when it calls a signal CHF, and the error rate is a single-value summary of the overall percentage of mistakes made by the algorithm.

Statistical Classifiers: Statistical classifiers can be divided into parametric and non-parametric techniques. Parametric statistical pattern recognition uses given or assumed information about the prior probabilities to obtain the classification [30]. The non-parametric approach, on the other hand, does not require any a priori information about the probability distributions of the data in each class; classification is performed based on the provided data samples with known class membership. Three statistical classifiers are used: the minimum-distance classifier, the Bayes minimum-error classifier, and the voting k-nearest neighbor (k-NN) classifier [27].

1. Minimum-Distance Classifier: This method assumes that the classes are similar in distribution and linearly separable. Hence, the decision lines are allocated halfway between the centers of the clusters of different classes.

2. Bayes Minimum-Error Classifier: The Bayes decision rule classifies an observation (i.e., a test sample) to the class with the higher a posteriori probability among the two classes. In this study, the data set is assumed to have a Gaussian conditional density function and the a priori probabilities are assumed equal for the two types (normal and CHF).

3. Voting k-Nearest Neighbor (k-NN) Classifier: k-NN is a non-parametric statistical classifier which assigns a test sample to the class of the majority of its k nearest neighbors. That is, the number of voting neighbors is

    k = Σ_{i=1}^{N} k_i                                  (3)

where k_i is the number of samples from class i in the neighborhood of the test sample and N is the total number of classes. The test sample is assigned to class m if k_m = max{k_i, i = 1..N}.

Back-Propagation Neural Networks (BPNN): In the present work, the resilient back-propagation training algorithm is used, with a sigmoid transfer function in the hidden layer. Two neurons are used in both the hidden layer and the output layer, and the number of neurons in the input layer varies with the number of features. The network is trained to output "1" in the output node corresponding to the specified class and "0" in the other node. For example, if the input represents a normal signal, the output is 1 in the first neuron and 0 in the second.
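The voting k-NN rule, including the tie case that produces the inconclusive outcomes reported for even k in the results, can be sketched as follows. This is a minimal illustration with our own names, not the authors' implementation:

```python
import numpy as np

def knn_classify(train_x, train_y, test_x, k=3):
    """Voting k-NN: assign each test sample to the majority class among its
    k nearest training samples (Euclidean distance).  A tie in the vote is
    returned as -1, i.e., an inconclusive classification."""
    train_x = np.asarray(train_x, dtype=float)
    train_y = np.asarray(train_y)
    labels = []
    for x in np.asarray(test_x, dtype=float):
        d = np.linalg.norm(train_x - x, axis=1)        # distances to all training samples
        nearest = train_y[np.argsort(d)[:k]]           # labels of the k nearest
        classes, votes = np.unique(nearest, return_counts=True)
        winners = classes[votes == votes.max()]
        labels.append(int(winners[0]) if len(winners) == 1 else -1)
    return labels
```

With two classes, ties can only occur for even k, which matches the zero inconclusive rates reported for odd k.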

4. Results
Data Collection: The HRV signals used in this work are obtained from PhysioBank, a biomedical signals database [30]. The normal signals are obtained from the Normal Sinus Rhythm RR Interval Database and the MIT-BIH Normal Sinus Rhythm Database, while the CHF signals are obtained from the Congestive Heart Failure RR Interval Database and the BIDMC Congestive Heart Failure Database [30]. The HRV signals are derived from the ECG signals using annotation files. Since the present work uses short-term HRV signals (2-5 minutes), they are extracted from a total of 106 long-term signals (from all databases). The dataset consists of 600 short-term HRV signals, divided into 400 for design (learning) and 200 for testing. The learning and test subsets each contain equal numbers of normal and CHF signals, and the same subsets are used for all classifiers to neutralize their effect on the results [27]. All signal analysis techniques used in this paper are implemented on a PC using custom software developed in Matlab v7.0 (Mathworks, Natick, MA).

Feature Extraction: The statistical means of the computed features for normal and CHF signals are computed, and the resulting values are compared to detect statistically significant differences between normal and CHF signals. For this, the t-test is used and the P-values are computed as shown in Table 1.

Classification Results: The specificity (spec.), sensitivity (sens.), positive predictive accuracy (+ve pred.) and error rate (err.) obtained by applying the three statistical classifiers and the BPNN to the feature vectors, before and after normalization, are shown in Tables 2-11.
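The four statistics reported in Tables 2-11 follow directly from the confusion counts. A minimal sketch (names and counts are illustrative; since the learning and test subsets contain equal numbers of normal and CHF signals, this overall error rate coincides with the per-class average):

```python
def performance(tp, tn, fp, fn):
    """Classification statistics from confusion counts.

    tp: CHF signals called CHF      tn: normal signals called normal
    fp: normal signals called CHF   fn: CHF signals called normal
    """
    return {
        "sensitivity": tp / (tp + fn),                   # CHF correctly identified
        "specificity": tn / (tn + fp),                   # normal correctly identified
        "positive_predictive_accuracy": tp / (tp + fp),  # correctness of CHF calls
        "error_rate": (fp + fn) / (tp + tn + fp + fn),   # overall fraction of mistakes
    }
```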

TABLE 1
P-VALUES OF T-TEST FOR FEATURES BEFORE AND AFTER NORMALIZATION

Feature                              Before Normalization   After Normalization
Time-Domain Measures
  Mean RR                            7.56e-22               6.33e-39
  SDNN RR                            0.316                  0.569
  Mean IHR                           2.86e-30               0.151
  SDNN IHR                           9.92e-07               0.088
  RMSSD                              0.192                  0.865
  NN50                               3.57e-03               0.0125
  pNN50                              3.57e-03               0.0125
  HRV Triangular index               3.94e-06               2.14e-15
  TINN                               0.2703                 0.862
Poincaré Plot Measures
  SD1                                0.192                  0.865
  SD2                                0.413                  0.381

TABLE 2
MINIMUM-DISTANCE CLASSIFIER, TIME-DOMAIN FEATURES

                               Before Normalization   After Normalization
Sensitivity                    82.30%                 21.50%
Specificity                    20.20%                 29.50%
Positive Predictive Accuracy   50.77%                 23.37%
Error Rate                     48.75%                 74.50%

TABLE 3
MINIMUM-DISTANCE CLASSIFIER, POINCARÉ PLOT FEATURES

                               Before Normalization   After Normalization
Sensitivity                    83.00%                 47.60%
Specificity                    18.60%                 47.90%
Positive Predictive Accuracy   50.48%                 47.74%
Error Rate                     49.20%                 52.25%

TABLE 4
BAYES MINIMUM-ERROR CLASSIFIER, TIME-DOMAIN FEATURES

                               Before Normalization   After Normalization
Sensitivity                    26.00%                 39.70%
Specificity                    38.60%                 38.50%
Positive Predictive Accuracy   29.74%                 39.23%
Error Rate                     67.70%                 60.90%

TABLE 5
BAYES MINIMUM-ERROR CLASSIFIER, POINCARÉ PLOT FEATURES

                               Before Normalization   After Normalization
Sensitivity                    32.80%                 79.40%
Specificity                    91.70%                 60.40%
Positive Predictive Accuracy   79.81%                 66.72%
Error Rate                     37.75%                 30.10%

TABLE 6
VOTING K-NN CLASSIFIER, TIME-DOMAIN FEATURES

     ---------- Before Normalization ----------    ---------- After Normalization -----------
k    Sens.     Spec.     +ve Pred.   Err.          Sens.     Spec.     +ve Pred.   Err.
1    65.80%    67.20%    66.27%      33.50%        92.80%    95.70%    93.00%      5.75%
2    84.00%    83.30%    83.89%      16.35%        97.40%    98.90%    97.44%      1.85%
3    68.00%    65.70%    67.25%      33.15%        94.50%    94.00%    94.47%      5.75%
4    80.00%    77.90%    79.57%      21.05%        96.70%    96.60%    96.70%      3.35%
5    68.80%    66.10%    67.93%      32.55%        94.40%    93.40%    94.34%      6.10%
6    76.80%    77.70%    77.01%      22.75%        96.50%    96.30%    96.49%      3.60%
7    68.30%    65.70%    67.45%      33.00%        93.90%    93.00%    93.84%      6.55%
8    76.80%    73.60%    76.03%      24.80%        95.00%    94.90%    94.99%      5.05%
9    70.00%    64.80%    68.35%      32.60%        93.20%    92.20%    93.13%      7.30%

TABLE 7
VOTING K-NN CLASSIFIER INCONCLUSIVE RATES, TIME-DOMAIN FEATURES

     Before Normalization      After Normalization
k    Normal      CHF           Normal      CHF
1    0.00%       0.00%         0.00%       0.00%
2    38.30%      37.10%        11.60%      10.70%
3    0.00%       0.00%         0.00%       0.00%
4    24.60%      26.10%        6.00%       7.00%
5    0.00%       0.00%         0.00%       0.00%
6    17.90%      24.20%        5.10%       7.50%
7    0.00%       0.00%         0.00%       0.00%
8    16.60%      19.50%        3.60%       5.00%
9    0.00%       0.00%         0.00%       0.00%

TABLE 8
VOTING K-NN CLASSIFIER, POINCARÉ PLOT FEATURES

     ---------- Before Normalization ----------    ---------- After Normalization -----------
k    Sens.     Spec.     +ve Pred.   Err.          Sens.     Spec.     +ve Pred.   Err.
1    69.10%    66.80%    68.37%      32.05%        70.40%    66.90%    69.33%      31.35%
2    85.60%    79.60%    84.68%      17.40%        85.50%    81.90%    84.96%      16.30%
3    73.90%    65.40%    71.48%      30.35%        72.80%    69.80%    71.96%      28.70%
4    82.80%    77.40%    81.82%      19.90%        81.90%    79.40%    81.44%      19.35%
5    75.10%    70.00%    73.76%      27.45%        73.20%    70.40%    72.43%      28.20%
6    82.10%    76.40%    81.02%      20.75%        80.40%    78.80%    80.08%      20.40%
7    76.10%    71.10%    74.84%      26.40%        73.90%    72.50%    73.53%      26.80%
8    81.20%    76.00%    80.17%      21.40%        79.50%    77.00%    78.97%      21.75%
9    77.70%    70.50%    75.97%      25.90%        73.60%    72.10%    73.20%      27.15%

TABLE 9
VOTING K-NN CLASSIFIER INCONCLUSIVE RATES, POINCARÉ PLOT FEATURES

     Before Normalization      After Normalization
k    Normal      CHF           Normal      CHF
1    0.00%       0.00%         0.00%       0.00%
2    33.20%      28.50%        34.60%      30.10%
3    0.00%       0.00%         0.00%       0.00%
4    18.40%      21.70%        21.70%      19.20%
5    0.00%       0.00%         0.00%       0.00%
6    13.90%      14.20%        15.40%      14.50%
7    0.00%       0.00%         0.00%       0.00%
8    9.80%       11.30%        12.10%      9.90%
9    0.00%       0.00%         0.00%       0.00%

TABLE 10
BACK-PROPAGATION NEURAL NETWORKS, TIME-DOMAIN FEATURES

                               Before Normalization   After Normalization
Sensitivity                    67.00%                 97.90%
Specificity                    67.00%                 98.20%
Positive Predictive Accuracy   69.43%                 98.19%
Error Rate                     31.25%                 1.95%

TABLE 11
BACK-PROPAGATION NEURAL NETWORKS, POINCARÉ PLOT FEATURES

                               Before Normalization   After Normalization
Sensitivity                    66.30%                 71.70%
Specificity                    75.90%                 68.90%
Positive Predictive Accuracy   73.34%                 69.74%
Error Rate                     28.90%                 29.70%

5. Discussion
Table 1 shows that normal signals can be statistically differentiated from CHF signals by some statistical features, such as Mean RR, Mean IHR, NN50 and pNN50, while the TINN geometric feature and the nonlinear features measured by the Poincaré plot show no statistically significant difference between normal and CHF signals. This can be explained by the need for a reasonable number of NN intervals to construct the geometric pattern: in practice, recordings of at least 20 min should be used to ensure the correct performance of the geometric methods, while the present study uses short-term recordings of about 5 min. Brennan has shown that the Poincaré plot features, i.e. SD1 and SD2, are related to linear indexes of HRV, specifically the SDNN (the standard deviation of the RR interval) and the SDSD (the standard deviation of the successive differences) [25]. Since it is evident from Table 1 that SDNN, as a statistical feature, shows no statistically significant difference between normal and CHF signals, the same holds for the Poincaré plot features in turn.

It is also evident from Tables 2 to 11 that normalizing the feature vector improves the overall classification accuracy, indicating its importance as a pre-processing step.

The minimum-distance classifier (Tables 2 and 3) shows the worst performance. Sensitivity is 82.30% and 21.50% before and after normalization of the time-domain features, respectively, while the positive predictive accuracy is, respectively, 50.77% and 23.37%. This method assumes that the clusters are linearly separable, which is not the case here, especially after normalization of the feature vector, where the ranges of the data are small and/or overlapping. We conclude that this method is not suitable for the current application. The results of the Bayes minimum-error classifier (Tables 4 and 5) are not satisfactory either: its maximum sensitivity is 79.4%, obtained with the Poincaré plot features, with a positive predictive accuracy of 60.40%. This could be related to improper initial assumptions about the probability distributions. The voting k-NN classifier (Tables 6, 7, 8 and 9) and the BPNN (Tables 10 and 11) show a high level of accuracy with the time-domain features: sensitivity reaches 97.90% and positive predictive accuracy exceeds 98%. This superior performance can be explained by the inherent independence of these sample-based techniques from the data distribution. The classification results agree with the t-test results in that features showing a statistically significant difference between normal and CHF signals yield high efficiency with suitable classification techniques.

6. Conclusion
Two methods for detecting CHF using HRV analysis are presented. Detection proceeds in two stages: feature extraction and classification. In the feature extraction stage, features are extracted using time-domain methods and a nonlinear method (the Poincaré plot), and are normalized before entering the classification stage. Classification is performed using three statistical classifiers (the minimum-distance classifier, the Bayes minimum-error classifier and the voting k-nearest neighbor classifier) and back-propagation neural networks. The results show good promise for automatic detection of CHF signals using the voting k-NN classifier and the BPNN, with a positive predictive accuracy of 98.19%. They also show that the Poincaré plot features are not capable of detecting CHF with any of the classifiers; other nonlinear features, such as the Lyapunov exponent and the correlation dimension, may be better alternatives that need to be explored. It is concluded that time-domain features discriminate the normal from the CHF signals better than the Poincaré plot features, and that normalization of the feature vector before classification greatly improves the detection accuracy.

7. References
[1] Congestive heart failure worldwide markets, clinical status and product development opportunities, New Medicine, Inc., 1-40, 1997.
[2] American Heart Association, Heart and Stroke Statistical Update, 2001.
[3] Ho KK, Pinsky JL, Kannel WB, Levy D, The epidemiology of heart failure: the Framingham Study, J Am Coll Cardiol, Oct; 22(4 Suppl A):6A-13A, 1993.
[4] Kleiger RE, Miller JP, Bigger JT Jr, Moss AJ, Decreased heart rate variability and its association with increased mortality after acute myocardial infarction, Am J Cardiol, Feb 1; 59(4):256-62, 1987.
[5] Casolo GC, Stroder P, Signorini C, Calzolari F, Zucchini M, Balli E, Sulla A, Lazzerini S, Heart rate variability during the acute phase of myocardial infarction, Circulation, Jun; 85(6):2073-9, 1992.
[6] Stein PK, Domitrovich PP, Kleiger RE, Schechtman KB, Rottman JN, Clinical and demographic determinants of heart rate variability in patients post myocardial infarction: insights from the cardiac arrhythmia suppression trial (CAST), Clin Cardiol, Mar; 23(3):187-94, 2000.
[7] Narendra Singh, Dmitry Mironov, Paul W. Armstrong, Allan M. Ross, Anatoly Langer, Heart Rate Variability Assessment Early After

Acute Myocardial Infarction: Pathophysiological and Prognostic Correlates, Circulation, 93:1388-1395, 1996.
[8] Hisako Tsuji, Martin G. Larson, Ferdinand J. Venditti Jr, Emily S. Manders, Jane C. Evans, Charles L. Feldman, Daniel Levy, Impact of Reduced Heart Rate Variability on Risk for Cardiac Events: The Framingham Heart Study, Circulation, 94:2850-2855, 1996.
[9] P.K. Stein, R.E. Kleiger, Insights from the study of Heart Rate Variability, Annu Rev Med, 50:249-261, 1999.
[10] Dekker JM, Crow RS, Folsom AR, Hannan PJ, Liao D, Swenne CA, Schouten EG, Low heart rate variability in a 2-minute rhythm strip predicts risk of coronary heart disease and mortality from several causes: the ARIC Study. Atherosclerosis Risk In Communities, Circulation, Sep 12; 102(11):1239-44, 2000.
[11] Malliani A, Lombardi F, Pagani M, Cerutti S, Power spectral analysis of cardiovascular variability in patients at risk for sudden cardiac death, J Cardiovasc Electrophysiol, Mar; 5(3):274-86, 1994.
[12] Mussalo H, Vanninen E, Ikaheimo R, Laitinen T, Laakso M, Lansimies E, Hartikainen J, Heart rate variability and its determinants in patients with severe or mild essential hypertension, Clin Physiol, 21(5):594-604, 2001.
[13] Raymond B, Taverner D, Nandagopal D, Mazumdar J, Classification of heart rate variability in patients with mild hypertension, Australas Phys Eng Sci Med, Dec; 20(4):207-13, 1997.
[14] Risk M, Bril V, Broadbridge C, Cohen A, Heart rate variability measurement in diabetic neuropathy: review of methods, Diabetes Technol Ther, Spring; 3(1):63-76, 2001.
[15] Pagani M, Heart rate variability and autonomic diabetic neuropathy, Diabetes Nutr Metab, Dec; 13(6):341-6, 2000.
[16] Lanting P, Faes TJ, Heimans JJ, ten Voorde BJ, Nauta JJ, Rompelman O, Spectral analysis of spontaneous heart rate variation in diabetic patients, Diabet Med, Sep-Oct; 7(8):705-10, 1990.
[17] KE Sands, ML Appel, LS Lilly, FJ Schoen, GH Mudge Jr, RJ Cohen, Power spectrum analysis of heart rate variability in human cardiac transplant recipients, Circulation, 79(1):76-82, 1989.
[18] Ramaekers D, Ector H, Vanhaecke J, van Cleemput J, van de Werf F, Heart rate variability after cardiac transplantation in humans, Pacing Clin Electrophysiol, Dec; 19(12 Pt 1):2112-9, 1996.
[19] Binder T, Frey B, Porenta G, Heinz G, Wutte M, Kreiner G, Gossinger H, Schmidinger H, Pacher R, Weber H, Prognostic value of heart rate variability in patients awaiting cardiac transplantation, Pacing Clin Electrophysiol, Nov; 15(11 Pt 2):2215-20, 1992.
[20] Casolo G, Balli E, Taddei T, Amuhasi J, Gori C, Decreased spontaneous heart rate variability in congestive heart failure, Am J Cardiol, 64:1162-1167, 1989.
[21] Woo MA, Stevenson WG, Moser DK, Middlekauff HR, Complex heart rate variability and serum norepinephrine levels in patients with advanced heart
failure, J Am Coll Cardiol, Mar 1; 23(3):565-9, 1994.
[22] Bonaduce D, Petretta M, Marciano F, Vicario ML, Apicella C, Rao MA, Nicolai E, Volpe M, Independent and incremental prognostic value of heart rate variability in patients with chronic heart failure, Am Heart J, Aug; 138(2 Pt 1):273-84, 1999.
[23] Aubert A, Ramaekers D, Beckers F, Breem R, Ector H, Van de Werf F, Time and frequency analysis of heart rate variability: pitfalls and misinterpretations, Monduzzi Editore, Bologna, Italy, 323-327, 1998.
[24] Marek Malik (Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology), Heart Rate Variability: Standards of Measurement, Physiological Interpretation, and Clinical Use, Circulation, 93:1043-1065, 1996.
[25] M. Brennan, M. Palaniswami, P. Kamen, Do Existing Measures of Poincaré Plot Geometry Reflect Nonlinear Features of Heart Rate Variability?, IEEE Trans Biomed Eng, Nov; 48(11):1342-1347, 2001.
[26] P. W. Kamen, Heart Rate Variability, Aust Family Physician, 25:1087-1094, 1996.
[27] Yasser M. Kadah, Aly A. Farag, Jacek M. Zurada, Ahmed M. Badawi, Abou-Bakr M. Youssef, Classification Algorithms for Quantitative Tissue Characterization of Diffuse Liver Disease from Ultrasound Images, IEEE Transactions on Medical Imaging, Aug; 15(4):466-478, 1996.
[28] H. L. van Trees, Detection, Estimation, and Modulation Theory, Part I, New York: Wiley, 1968.
[29] Brian Young, Don Brodnick, Randy Spaulding, A Comparative Study of a Hidden Markov Model Detector for Atrial Fibrillation, Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop, Aug 23-25; 468-476, 1999.
[30] PhysioBank, Physiologic Signal Archives for Biomedical Research, http://www.physionet.org (Accessed December 2006).

Alia S. Khaled received her B.Sc. and M.Sc. degrees in biomedical engineering from Cairo University in 2003 and 2006, respectively. She has been a teaching assistant in the Department of Systems and Biomedical Engineering, Cairo University, since 2003. Her research interests include medical instrumentation, biomedical signal processing, medical imaging, and pattern recognition.

Mohamed I. Owis is currently an Assistant Professor in the Department of Systems and Biomedical Engineering, Faculty of Engineering, Cairo University, Egypt. He received his B.Sc. in Systems and Biomedical Engineering from the Faculty of Engineering, Cairo University, in 1992, his M.Eng. and Ph.D. in Systems and Biomedical Engineering from Cairo University in 1996 and 2002, respectively, and an M.Sc. in Computer Science from Old Dominion University, USA, in 1999. His research interests include pattern classification, signal processing, data mining and artificial intelligence.

Abdalla S. A. Mohamed holds a Ph.D. in Systems and Biomedical Engineering (1982), a B.Sc. in Electrical Engineering (1970), an M.Sc. in Computer Science (1974), and an M.Sc. in Computer Engineering (1978). Dr. Mohamed joined Cairo University as an Assistant Professor in 1983, where he is currently a professor and chairman of the Systems and Biomedical Engineering Department. In 1984, he was awarded a scholarship from NIH, Maryland, for medical instrumentation prototyping. During 1985-1986, he joined the systems engineering group at the University of Waterloo, Canada. He was an Assistant Professor in the Department of Computer Science, University of Regina, Canada (1986-1989), and an Associate Professor in the Faculty of Medicine & Health Science, University of United Arab Emirates (1991-1993). His research interests are concentrated in the fields of biological signal processing, medical image analysis, pattern recognition, and medical decision making.

