
A Method for Optimizing and Validating Institution-Specific Flagging Criteria for Automated Cell Counters

Anthony Sireci, MD; Robert Schlaberg, MD, MPH; Alexander Kratz, MD, PhD, MPH

Context.—Automated cell counters use alerts (flags) to indicate which white blood cell differential counts can be released directly from the instrument and which samples require labor-intensive slide reviews. The thresholds at which many of these flags are triggered can be adjusted by individual laboratories. Many users, however, use factory-default settings or adjust the thresholds through a process of trial and error.

Objective.—To develop a systematic method, combining statistical analysis and clinical judgment, to optimize the flagging thresholds on automated cell counters.

Design.—Data from 502 samples flagged by Sysmex XE-2100/5000 (Sysmex, Kobe, Japan) instruments with at least 1 of 5 user-adjustable white blood cell count flags were used to change the flagging thresholds for maximal diagnostic effectiveness by optimizing the Youden index

The accurate and timely delivery of differential white blood cell (WBC) count results by the hematology laboratory is crucial in many clinical settings, including acute infections, hematologic malignancies, and the administration of chemotherapy. Automated cell counters were introduced by Wallace Coulter in 1953 and have since replaced the microscope as the instrument of choice for most peripheral, differential WBC counts. The advantages of automated cell counters over the microscope include faster turnaround times, significantly lower labor costs, lack of interobserver variation, and results with greater statistical validity.1 Modern automated cell counters provide a reliable differential WBC count for samples that are within reference range and for those that exhibit only quantitative abnormalities. Qualitatively abnormal sample results, such as those with abnormal or immature cells, still require the preparation of a slide and microscopic analysis.2 The instruments use flags (electronic or printed alerts) to notify the user that the automated differential WBC count may not be correct and requires review.
Accepted for publication January 25, 2010. From the Department of Pathology, Columbia University College of Physicians and Surgeons, New York, New York; and the Clinical Laboratory Service, NewYork-Presbyterian Hospital, New York. Dr Schlaberg is now with the Department of Pathology, University of Utah, Salt Lake City. The authors have no relevant financial interest in the products or companies described in this article. Reprints: Alexander Kratz, MD, PhD, MPH, Columbia University Medical Center, Core Laboratory, 622 W 168th St, PH3-363, New York, NY 10032 (e-mail: ak2651@columbia.edu). 1528 Arch Pathol Lab Med, Vol 134, October 2010

for each flag (the optimization set). The optimized thresholds were then validated with a second set of 378 samples (the validation set).

Results.—Use of the new thresholds reduced the review rate caused by the 5 flags from 6.5% to 2.9% and improved the positive predictive value of the flagging system for any abnormality from 27% to 37%.

Conclusions.—This method can be used to optimize thresholds for flag alerts on automated cell counters of any type and to improve the overall positive predictive value of the flagging system at the expense of a reduction in the negative predictive value. A reduced manual review rate helps to focus resources on differential white blood cell counts that are of clinical significance and may improve turnaround time. (Arch Pathol Lab Med. 2010;134:1528-1533)

The factors that trigger these flags vary with the underlying technology of the cell counter; in most cases, the flags have some level of specificity for the presence of certain abnormal findings. For example, instruments that use light scatter and fluorescence will flag samples that contain cell populations in certain areas of their scattergrams with a blast flag. The presence of one or more flags does not indicate that a specific abnormality or any abnormality has to be present; it only indicates that there is an increased probability of an abnormality that can only be excluded or proven by slide review. The presence of a flag on a sample, therefore, usually prompts the laboratory to prepare and review a slide with a microscope or a digital imaging device.3 If no qualitative WBC abnormalities are identified on the slide and the quantitative distribution of the normal cell populations on the slide corresponds to the automated differential, the results from the instrument may be released. Otherwise, a full microscopic differential WBC count may be indicated.
The percentage of differential WBC count samples that are flagged by an automated cell counter and submitted for microscopic review is known as the review rate. Factors that influence the review rate include the patient population served (eg, the review rate will be higher in patients with hematologic or oncologic disease because of the immature or abnormal WBCs often observed in these patients), the type of automated cell counter, and the settings of the flagging thresholds of the automated cell counter. The latter variable is most easily influenced by individual laboratories. Most automated cell counters are installed with factory-set or factory-recommended settings for the thresholds. It

is then up to the individual laboratories to adjust the thresholds to the clinical needs of their patients and clinical staff. These adjustments involve a careful balancing of the potential risk of missing any abnormal cells (which favors thresholds set at very low values) and the increase in turnaround time, manual labor, and cost caused by an increase in the review rate (which favors thresholds set at high values). In many cases, laboratories opt either to use the factory-set defaults or to make adjustments over time by a process of trial and error, without a clear understanding of the exact consequences of the adjustments. On the Sysmex XE-2100 and XE-5000 (hereafter, the XE-2100/5000; Sysmex, Kobe, Japan) analyzers, more than 30 different flags are used to indicate the possible presence of qualitative and quantitative abnormalities of red blood cells, WBCs, or platelets in a sample. Our study investigated 5 of those flags that indicate the possible presence of abnormal WBCs and that have user-adjustable thresholds.4,5 Using those 5 flags as a model system, we developed a systematic method for optimizing the thresholds at which the cell counter's flags are triggered.

MATERIALS AND METHODS

The Sysmex XE-2100/5000 Line of Automated Cell Counters
The Sysmex XE-2100/5000 line of automated cell counters establishes a 5-part differential WBC count using an optical technique that includes forward scatter, side scatter, and side fluorescence as well as a method of electric impedance.6,7 An upgraded software package (XE-IG Master, Sysmex) allows users to obtain an extended differential WBC count with the addition of an immature granulocyte count, consisting of promyelocytes, myelocytes, and metamyelocytes, to the standard 5-part differential test.8,9 The 5 WBC-specific flags with user-definable thresholds that we used as our model system are called (1) blasts, (2) immature granulocytes, (3) left shift, (4) atypical lymphocytes, and (5) abnormal lymphocytes/lymphoblasts flags. These flags are generated by patterns in the scattergrams that are typical for certain abnormalities. We did not study WBC count flags whose triggering thresholds were not adjustable by the user or flags that indicate possible red blood cell or platelet abnormalities.

Table 1. Abnormal Findings Qualifying as True-Positives on Manual Differential White Blood Cell Counts

  Abnormality                                          Threshold, %
  Blasts, plasma cells, hypersegmented
    neutrophils (>6 lobes)                             >1
  Metamyelocytes and/or myelocytes                     >3
  Atypical lymphocytes                                 >5
  Band forms                                           >7
  Nucleated red blood cell counts                      >1

Data are modified from the recommendations of the International Consensus Group for Hematology Review.11

If one or more of the criteria in Table 1 was met, a microscopic differential was necessary and the case was counted as a true-positive; if none of the criteria was met, a microscopic differential was not necessary, the automated differential WBC count result was released, and the case was counted as a false-positive.

Study Samples
The study was performed in the Core Laboratory of the Columbia University Medical Center campus of the NewYork-Presbyterian Hospital (New York), a tertiary care, academic medical center serving a large inpatient and outpatient population, including adult and pediatric hematology-oncology services. The study samples were routine patient specimens that had a differential WBC count ordered by the clinician and a microscopic slide reviewed because one or more of the 5 flags of interest was triggered at the factory-default settings.

Statistical Analysis
Statistical analysis was performed using Excel software (Microsoft, Redmond, Washington). Receiver operating characteristic (ROC) curves were constructed using Stata 10.0 (StataCorp, College Station, Texas) software. The optimal settings for the flagging thresholds were described using positive predictive values (PPVs) and efficiency, and they were derived using the maximized Youden index (YI). The PPV was defined as

  PPV = True-Positives / All Positives

The YI is a function of both sensitivity and specificity and is used as a summary of the diagnostic effectiveness of an assay at various cutoffs.12 The threshold at which the YI is maximized, therefore, represents the best performance profile of a test, corresponding to the largest vertical distance from the diagonal to the ROC curve. The YI can be seen as a simplified measure of the area under the ROC curve and a method of minimizing regret in medical decision making.13,14 The YI assumes a value between 0 and 1, with 1 representing the most effective cutoff, and is calculated with the equation

  YI = Sensitivity + Specificity − 1

The maximized YI was used to select the optimized thresholds of the 5 flags, both for their specific abnormal findings and for the presence of any abnormalities in the WBC counts. Efficiency is defined as

  Efficiency = (True-Positives + True-Negatives) / All Cases

Efficiency can best be understood as the proportion of samples that an assay (or threshold) correctly classifies as disease or nondisease, that is, the probability that the test result is correct. As such, efficiency is a useful measure for comparing 2 assays or various thresholds.15

Study Design.—Optimization Set.—A group of 502 specimens that had been flagged by one or more of the 5 flags of interest when thresholds were set at factory-default levels was used as the

Manual Differential WBC Counts


Blood smears were prepared using the SP-1000i slidemaker-stainer (Sysmex; adult samples) or using the push-pull method with a spreader slide (pediatric samples) and stained with Wright-Giemsa. Manual differential WBC counts using the standard microscopic technique followed the laboratory's standard operating procedure, which is based on the Clinical and Laboratory Standards Institute guidelines.10 One hundred WBCs were counted for each manual differential WBC count test. In our laboratory, slides prepared from flagged samples are scanned by technologists for the presence of one or more abnormal cell populations, as summarized in Table 1. These criteria are based on the recommendations of the International Consensus Group for Hematology Review and were modified based on the clinical needs of our institution.11 The modifications were based on an informal survey of the practices of other laboratories, a review of the literature, consultation with the clinical staff, and the experience of our laboratory leadership. If no abnormal cell populations are found and the instrument's differential results appear representative of the relative distribution of cells on the slide, the automated differential WBC count results are released; if one or more of the conditions listed in Table 1 is present, a full microscopic differential WBC count is performed. We used these criteria to identify cases as true-positives or false-positives.

Figure. A representative receiver operating characteristic (ROC) curve demonstrating the process of optimizing the flagging thresholds by maximizing the Youden index (YI). The ROC curve of the abnormal lymphocyte/lymphoblast flag for the detection of the abnormalities for which it is named (atypical lymphocytes and/or lymphoblasts) is shown; each point on the curve represents a different threshold for the flag. The area under the curve is 0.6108. At the factory-default threshold of 99, the sensitivity and specificity of the flag for its abnormality are 40% and 68.4%, respectively, with a resultant YI of 0.084. The highest YI is achieved at a cutoff of 200, with a sensitivity of 30% and a specificity of 95.34%. The point of maximized YI is also equivalent to the point on the curve with the largest vertical distance from the diagonal. This ROC curve is derived from the data collected and, therefore, does not include samples that were negative for all flags at factory settings (as at least one flag had to be triggered for the sample to be included in the study).
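The YI arithmetic in the Figure legend can be reproduced with a short sketch of the three summary statistics defined in Materials and Methods (a minimal illustration; the function names are ours, not part of the study):

```python
def ppv(tp, fp):
    # Positive predictive value = true-positives / all positives
    return tp / (tp + fp)

def efficiency(tp, tn, total):
    # Proportion of all cases correctly classified
    return (tp + tn) / total

def youden(sensitivity, specificity):
    # YI = sensitivity + specificity - 1 (higher is better)
    return sensitivity + specificity - 1

# Factory-default threshold of 99 in the Figure legend:
print(round(youden(0.40, 0.684), 3))   # 0.084
# Optimized cutoff of 200 in the Figure legend:
print(round(youden(0.30, 0.9534), 3))  # 0.253
```

Note that the optimized cutoff wins on YI despite its lower sensitivity, which is exactly the trade-off the legend describes.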

optimization set. The numeric value of each of the 5 differential WBC count-specific flag variables was extracted from the automated cell counter and incorporated into an Excel spreadsheet. Optimization began by raising the cutoff of each flag from the factory-default setting in increments of 10 units and calculating the YI of each level for the identification of the specific abnormality denoted by the flag (eg, the blast flag for blasts). The threshold yielding the highest YI was chosen for each flag. The Figure depicts graphically the process of choosing a cutoff that maximizes the YI, using the ROC curve of the abnormal lymphocyte/lymphoblast flag for its abnormality (atypical lymphocytes or lymphoblasts). Note that although the sensitivity of the flag decreases with optimization, the maximized YI represents the optimal balance between true-positives and false-positives. We next varied the threshold of each flag from the previously optimized point, in 10-unit increments, to find the threshold corresponding to the highest YI for each flag in the presence of any abnormality. The 5 flags were adjusted in the following order: blasts, abnormal lymphocytes/lymphoblasts, immature granulocytes, atypical lymphocytes, and left shift. In addition to maximizing the YI, we also used clinical judgment in this step. For example, we made the clinical decision that we were willing to accept a significant number of missed bands and some missed myelocytes and metamyelocytes, but we would not tolerate missing a single blast.

Validation Set.—We applied the optimized flagging thresholds derived from the optimization set to a new, separate set of 378 samples that had been flagged by at least one of the 5 flags set at factory-default levels. The PPV, the efficiency, and the resultant review rate were calculated for the validation set using the optimized criteria.
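A minimal sketch of the incremental search described above, assuming each sample is reduced to its numeric flag value and a boolean marking whether slide review confirmed the flag's specific abnormality (the data layout and the upper search bound are our assumptions, not details from the study):

```python
def best_threshold(samples, start, step=10, stop=300):
    """Scan cutoffs upward from the factory default in 10-unit steps
    and return the (cutoff, YI) pair with the highest Youden index."""
    best_cutoff, best_yi = None, -1.0
    for cutoff in range(start, stop + 1, step):
        # Classify every sample at this cutoff against slide-review truth
        tp = sum(1 for v, abnormal in samples if v >= cutoff and abnormal)
        fn = sum(1 for v, abnormal in samples if v < cutoff and abnormal)
        fp = sum(1 for v, abnormal in samples if v >= cutoff and not abnormal)
        tn = sum(1 for v, abnormal in samples if v < cutoff and not abnormal)
        sensitivity = tp / (tp + fn) if tp + fn else 0.0
        specificity = tn / (tn + fp) if tn + fp else 0.0
        yi = sensitivity + specificity - 1
        if yi > best_yi:
            best_cutoff, best_yi = cutoff, yi
    return best_cutoff, best_yi
```

In the study's second pass, the same scan would be repeated with `abnormal` marking any WBC abnormality rather than the flag-specific one, and the statistically best cutoff could still be overridden by clinical judgment (eg, never missing a blast).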
Additionally, all cases that would have been missed by our new criteria were enumerated and followed up for evaluation.

Additional Blast Cases.—To increase the number of cases with actual leukemic blasts in the differential WBC count tests, we collected additional samples, flagged at factory-set criteria, that were ultimately confirmed by manual differential tests to harbor more than 1% blasts. A total of 14 cases were recruited, and the values of each of the 5 flags were recorded to assess whether the case would have been detected by optimized criteria.

blasts) and 33% (PPV of the immature granulocyte flag for the presence of myelocytes and/or metamyelocytes) (Table 2). When we considered the PPV of each flag for any abnormality, the PPV ranged from 8.6% (PPV of the abnormal lymphocyte/lymphoblast flag for any WBC abnormal finding on manual differential) to 64% (PPV of the blast flag for any WBC abnormal finding). The combined PPV for any abnormal finding of any one or more of the 5 flags at factory-default thresholds was 23%. Sensitivity and the negative predictive values could not be assessed because of the lack of cases negative at factory-default settings (a consequence of our sampling scheme).

Optimized Thresholds

The thresholds for the 5 flags were optimized for detection of their specific abnormalities, as well as for detection of any abnormality, as described in Materials and Methods, to the values shown in Table 3. The results of optimization on the abnormality-specific PPV of each flag and the PPV of each flag for any abnormality are given in Table 2 for comparison to factory-default thresholds. The abnormality-specific PPV of all but the atypical lymphocyte flag was improved, whereas all overall PPVs were improved by optimization. The overall PPV of the 5 flags for any abnormality increased to 31%, and the efficiency of flagging overall was 52%. Additionally, the flagging efficiencies of each flag for its specific abnormality and for any abnormality were calculated, and these data are also summarized in Table 2.

Validation of the Optimized Settings

A second, independent set of 378 samples was used to validate the optimized settings. Application of the optimized thresholds instead of the factory-default settings to these samples would have reduced the number of false-positive samples from 275 to 106 (Table 4). The overall PPV of the 5 flags for any abnormality in the differential white blood cell count increased from 27% at factory-default settings to 37% with the optimized thresholds.

RESULTS

Performance of the 5 Flags at Factory-Default Settings

The abnormality-specific PPV of each of the 5 flags was between 5.4% (PPV of the blast flag for the presence of

Table 2. Number of False-Positive Samples and the Positive Predictive Values (PPVs) of Each Flag for Its Specific Abnormal Finding (Optimization Set, n = 502)

  Category                                Blast   Left Shift   Immature           Atypical          Abnormal Lymphocyte/
                                          Flag    Flag         Granulocyte Flag   Lymphocyte Flag   Lymphoblast Flag
  Factory-set thresholds
    False-positives, No.                  53      163          84                 72                149
    Abnormality-specific PPV, %           5.4     24           33                 13                7.4
    Overall PPV, %                        64      37           45                 19                8.6
  Optimized thresholds
    False-positives, No.                  35      111          44                 34                22
    Abnormality-specific PPV, %           7.9     29           41                 11                29
    Abnormality-specific efficiency, %    93      76           85                 89                91
    Overall PPV, %                        71      45           52                 21                29
    Overall efficiency, %                 80      74           77                 72                74

Effects of the Adjustment of the Flagging Thresholds on the Review Rate

The review rate in the validation set for the 5 flags studied was 6.5% of all complete blood cell counts with differentials when the factory-default thresholds were used. When the optimized thresholds were applied, the review rate for the 5 flags of interest dropped to 2.9%. These data are summarized in Table 4.

Clinical Effect of False-Negative Cases

The breakdown of abnormalities observed in the manual differential WBC counts of the validation set (n = 378) was 4 samples (1.1%) with blasts, 48 samples (12.7%) with more than 3% myelocytes and/or metamyelocytes, 51 samples (13.5%) with more than 5% atypical lymphocytes, and 50 samples (13.2%) with more than 7% bands. Many of these samples had more than one flag; 103 different samples (27.2%) had one or more flags. Use of the optimized thresholds resulted in 41 samples (10.8%) in the validation set that were flagged by the factory-default settings and truly harbored significant pathology but were missed by the optimized thresholds (false-negatives). No cases of blasts were missed. The abnormalities present in these cases are summarized in Table 5.

Additional Blast Cases

Fourteen additional cases flagged at factory-default settings and proven, by manual differential WBC count, to harbor blast cells were analyzed. All 14 cases were detected by our optimized criteria; 12 (86%) by the blast flag and 2 (14%) by a combination of the other 4 user-adjustable flags. Optimization did not result in any cases of missed blasts.

Table 3. Flag Thresholds for Factory-Default and Optimized Settings

  Flags                            Factory Default   Optimized
  Blast                            99                200
  Immature granulocyte             159a              250
  Left shift                       99                200
  Atypical lymphocyte              99                150
  Abnormal lymphocyte/lymphoblast  99                200

a The factory-default setting for the immature granulocyte flag is 99. However, as a result of previous adjustments by our laboratory, the immature granulocyte flag threshold was set at 159 at the time the study was initiated.

DISCUSSION

In this study, we have described a method for optimizing flagging thresholds on an automated cell counter that allows laboratories to safely reduce the number of differential WBC counts that require preparation of a slide (the review rate). Through a combination of a maximized YI and clinical judgment, we were able to adjust the thresholds of 5 flags on the Sysmex XE-2100/5000 line of hematology analyzers and reduce the review rate from those 5 flags from 6.5% to 2.9%. Overall, the optimized thresholds resulted in an improved PPV of each flag for either its particular abnormality or for any abnormality in the differential WBC count. The exception was the atypical lymphocyte flag, whose PPV decreased slightly in the optimization process. The generally low PPV of the blast flags (blasts and abnormal lymphocytes/lymphoblasts) for their specific abnormalities in both factory-default and optimized settings is necessary, given the clinical need to detect all cases of blasts, which requires relatively nonspecific flagging. Interestingly, although each flag is generally a poor predictor of its specific abnormality (ie, the abnormality after which it is named), the flags (with the exception of the atypical lymphocyte flag) have good PPVs for the detection of any abnormality. Comparisons of the efficiency rates of flagging reported in the literature on automated cell counters are inherently difficult because of different definitions of clinically significant abnormalities and true-negatives and the use of different types and models of automated cell counters in different patient populations.

With these limitations, the efficiency rates of the differential WBC count flags in our study are very similar to those reported by Lacombe and colleagues16 for the Cobas Argos 5 Diff (ABX/Roche Hematology Division, Montpellier, France) and the Technicon H2 (Technicon Instruments, Tarrytown, New York), and by Ruzicka and coworkers17 for Sysmex XE-2100 instruments. Previous studies18,19 have shown that flagging sensitivity is dependent on the total WBC count, with a lower sensitivity in leukopenic samples and a lower specificity in samples with WBC counts greater than 10 000/µL. However, other studies17 have shown only a mild effect of

Table 4. False-Positives, Positive Predictive Values (PPV), Review Rate, and Efficiency of the 5 Flags in the Validation Set (n = 378)

  Threshold         False-Positives, No.   Overall PPV, %   Review Rate, %   Overall Efficiency, %
  Factory default   275                    27               6.5              NA
  Optimized         106                    37               2.9              61

Abbreviation: NA, not available.

WBC count on overall efficiency. We did not examine this potential confounder. Although the adjusted thresholds afforded a decrease in the number of false-positive flags, there were cases with abnormalities that were missed by our new criteria but would have been flagged by factory-default settings. Most of the missed cases had either more than 5% atypical lymphocytes (n = 26) or more than 7% bands (n = 9). Studies indicate that the band count is a nonspecific, inaccurate, and imprecise laboratory test, with a review of the literature providing little support for the clinical utility of the band count in patients over 3 months of age.20(p101) As our laboratory routinely prepares slides for all newborns, we were not concerned that underreporting of band forms because of changes in our flagging thresholds would have an adverse clinical effect. The difficulty in correctly classifying lymphocyte findings either as within reference range or as atypical was pointed out in 1977 by Koepke.21 Using data based on proficiency-sample glass slides sent to more than 4000 laboratories, he reported a coefficient of variation of 88% for the atypical lymphocyte count.21 More recently, van der Meer and colleagues22,23 sent PowerPoint presentations of WBCs to 671 technologists at 114 hospital laboratories. That study22 also found significant interobserver variability in the classification of lymphocytes as atypical or within reference range. Furthermore, when the same cell was shown twice in the PowerPoint presentation, it was classified by 210 of the 617 observers (34%) as a different subtype.22,23 Because of the limited reproducibility of the atypical lymphocyte count, we felt that missing some cases with increased numbers of atypical lymphocytes was acceptable. Although no cases of more than 1% blasts were missed by our optimized settings in our validation set, we were concerned that we did not have a sufficient number of cases to adequately test the new criteria.
For that reason, an additional 14 cases, flagged by factory-default criteria and confirmed to harbor blasts by manual differential, were analyzed. No cases of blasts would have been missed using optimized criteria, although the optimized blast flag only detected 12 of the 14 cases. The additional 2 cases were detected by a combination of the 4 remaining flags. We conclude that our optimized thresholds are, at minimum, no worse at the detection of blasts than the factory-default settings.
Table 5. False-Negative Samples Resulting From Optimized Criteria in the Validation Set (n = 378)

  Abnormality                             Samples, No.   Mean, %   Range, %
  >7% bands                               10             14.75     8-24
  >5% atypical lymphocytes                28             9.52      5-33
  >3% metamyelocytes and/or myelocytes    6              4.50      3-6

We were concerned, however, about the 10 cases of increased myeloid progenitors missed by our optimized criteria. The percentage of immature granulocytes is a reproducible parameter and is important in the diagnosis of many disease states.9 Review of patient histories showed that 6 of the missed cases represented acute infectious processes (cryptococcal meningitis, methicillin-susceptible Staphylococcus aureus bacteremia, infectious diarrheal disease, pediatric sepsis, and 2 cases of urinary tract infection in immunocompromised hosts). Follow-up on 3 additional samples revealed 1 patient who was recovering from extensive excision of a facial squamous cell carcinoma, 1 patient with sickle-cell disease and pain crisis, and 1 patient with new-onset pericarditis of hitherto undefined etiology. The final case was one of previously diagnosed chronic myeloid leukemia. The accurate enumeration of myeloid precursors was clinically important in these cases. The goal of our protocol was to improve the PPV of our system of flags, thereby reducing the number of manual differential WBC counts performed on false-positive specimens. The number of missed immature myeloid cells is a consequence and limitation of our method of optimization. Use of the maximized YI optimizes the relationship between true-positives and false-positives, thereby improving the PPV. However, that improved PPV came at the expense of a decrease in the negative predictive value (NPV), particularly in the area of immature myeloid precursors, such as myelocytes and metamyelocytes. Our analysis was limited to improving the PPV because our data set included only samples flagged at factory-default settings, thereby precluding estimation of a true baseline measure of the NPV. We considered reducing our immature granulocyte flag back to factory settings, thereby reducing the number of missed myeloid progenitors from 10 to 5.
Doing so would increase the number of false-positives in our sample from 106 to 113 and the review rate from 2.9% to 3.1%, not a substantial increase. Four of the other 5 missed cases would be detected only by reducing the left shift flag to the factory-default setting. However, doing so would increase the number of false-positives from 106 to 148 and, consequently, increase our review rate to 4%, while decreasing the PPV to 34.1% and the efficiency to 54%. The last missed case would only have been detected by decreasing the blast flag to 110. An additional mechanism by which missed myeloid progenitors might be avoided is the concurrent introduction of the XE-IG Master software with the new flagging system. That software allows the reporting of a parameter called immature granulocytes (promyelocytes, myelocytes, and metamyelocytes) directly from the analyzer, without a slide review.9,24 Additional studies are required to validate the use of that technology with our optimized flagging settings. The optimized flagging criteria described here reduce the review rate from the 5 flags studied from 6.5% to 2.9%.
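The headline validation figures quoted above can be cross-checked from counts reported in the Results; the arithmetic below uses only numbers stated in the article, but the variable names are ours:

```python
# Validation set: 103 flagged samples truly harbored an abnormality,
# 41 of which were missed at the optimized thresholds (false-negatives),
# and 106 false-positives remained (Table 4).
tp = 103 - 41                  # abnormal samples still flagged
fp = 106
ppv = tp / (tp + fp)           # ~0.37, matching Table 4

# 378 samples were flagged at factory defaults, a 6.5% review rate;
# the review rate scales with the number of samples still flagged.
review_rate = (tp + fp) / 378 * 6.5   # ~2.9%, matching Table 4
```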


Further studies will be necessary to similarly decrease the review rates from other user-adjustable flags on our analyzers. In conclusion, we have developed a method for optimizing the thresholds for quantitative flags on automated cell counters and for reducing review rates. We improved the overall PPV of each flag for any abnormal finding and achieved flagging efficiencies similar to those reported in studies using other analyzers. Although this study was performed on the Sysmex XE-2100/5000 line of analyzers, the overall approach to optimization can be used on any hematology analyzer that uses quantitative flagging criteria.
The authors thank Barbara J. Connell, MS, MT SH(ASCP), for advice and careful reading of the manuscript. This work is dedicated to the memory of Daniel J. Fink, MD, MPH, who initiated the research that culminated in this article.
References
1. Pierre RV. Peripheral blood film review: the demise of the eyecount leukocyte differential. Clin Lab Med. 2002;22(1):279-297.
2. Hoyer JD. Leukocyte differential. Mayo Clin Proc. 1993;68(10):1027-1028.
3. Kratz A, Bengtsson HI, Casey JE, et al. Performance evaluation of the CellaVision DM96 system: WBC differentials by automated digital image analysis supported by an artificial neural network. Am J Clin Pathol. 2005;124(5):770-781.
4. Briggs C, Harrison P, Grant D, Staves J, Chavada N, Machin SJ. Performance evaluation of the Sysmex XE-2100, automated haematology analyser. Sysmex J Int. 1999;9(2):113-119.
5. Gould N, Connell B, Dyer K, Richmond T. Performance evaluation of the Sysmex XE-2100, automated hematology analyzer. Sysmex J Int. 1999;9(2):120-128.
6. Fujimoto K. Principles of measurement in hematology analyzers manufactured by Sysmex Corporation. Sysmex J Int. 1999;9(1):31-40.
7. Hiroyuki I. Overview of automated hematology analyzer XE-2100. Sysmex J Int. 1999;9(1):58-64.
8. Briggs C, Kunka S, Fujimoto H, Hamaguchi Y, Davis BH, Machin SJ. Evaluation of immature granulocyte counts by the XE-IG master: upgraded software for the XE-2100 automated hematology analyzer. Lab Hematol. 2003;9(3):117-124.
9. Ansari-Lari MA, Kickler TS, Borowitz MJ. Immature granulocyte measurement using the Sysmex XE-2100: relationship to infection and sepsis. Am J Clin Pathol. 2003;120(5):795-799.
10. National Committee for Clinical Laboratory Standards. Reference Leukocyte Differential Count (Proportional) and Evaluation of Instrumental Methods. Villanova, PA: NCCLS; 1992. Approved NCCLS document H20-A.
11. Barnes PW, McFadden SL, Machin SJ, Simson E. The international consensus group for hematology review: suggested criteria for action following automated CBC and WBC differential analysis. Lab Hematol. 2005;11(2):83-90.
12. Schisterman EF, Perkins NJ, Liu A, Bondell H. Optimal cut-point and its corresponding Youden Index to discriminate individuals using pooled blood samples. Epidemiology. 2005;16(1):73-81.
13. Hilden J, Glasziou P. Regret graphs, diagnostic uncertainty and Youden's Index. Stat Med. 1996;15(10):969-986.
14. Pekkanen J, Pearce N. Defining asthma in epidemiological studies. Eur Respir J. 1999;14(4):951-957.
15. John R, Lifshitz MR, Jhang J, Fink DJ. Post-analysis: medical decision-making. In: McPherson RA, Pincus MR, eds. Henry's Clinical Diagnosis and Management by Laboratory Methods. 21st ed. Philadelphia, PA: Elsevier; 2007:68-75.
16. Lacombe F, Cazaux N, Briais A, et al. Evaluation of the leukocyte differential flags on an hematologic analyzer: the Cobas Argos 5 Diff. Am J Clin Pathol. 1995;104(5):495-502.
17. Ruzicka K, Veitl M, Thalhammer-Scherrer R, Schwarzinger I. The new hematology analyzer Sysmex XE-2100: performance evaluation of a novel white blood cell differential technology. Arch Pathol Lab Med. 2001;125(3):391-396.
18. Korninger L, Mustafa G, Schwarzinger I. The haematology analyser SF3000: performance of the automated white blood cell differential count in comparison to the haematology analyser NE-1500. Clin Lab Haematol. 1998;20(2):81-86.
19. Thalhammer-Scherrer R, Knobl P, Korninger L, Schwarzinger I. Automated five-part white blood cell differential counts: efficiency of software-generated white blood cell suspect flags of the hematology analyzers Sysmex SE-9000, Sysmex NE-8000, and Coulter STKS. Arch Pathol Lab Med. 1997;121(6):573-577.
20. Cornbleet PJ. Clinical utility of the band count. Clin Lab Med. 2002;22(1):101-136.
21. Koepke JA. A delineation of performance criteria for the differentiation of leukocytes. Am J Clin Pathol. 1977;68(1)(suppl):202-206.
22. van der Meer W, Scott CS, de Keijzer MH. Automated flagging influences the inconsistency and bias of band cell and atypical lymphocyte morphological differentials. Clin Chem Lab Med. 2004;42(4):371-377.
23. van der Meer W, van Gelder W, de Keijzer R, Willems H. The divergent morphological classification of variant lymphocytes in blood smears. J Clin Pathol. 2007;60(7):838-839.
24. Briggs C, Kunka S, Pennaneach C, Forbes L, Machin SJ. Performance evaluation of a new compact hematology analyzer, the Sysmex pocH-100i. Lab Hematol. 2003;9(4):225-233.
