VOLUME 142
FANYOU KONG
Center for Analysis and Prediction of Storms, University of Oklahoma, Norman, Oklahoma
MING XUE
School of Meteorology and Center for Analysis and Prediction of Storms, University of Oklahoma, Norman, Oklahoma
(Manuscript received 22 September 2013, in final form 14 January 2014)
ABSTRACT
Two approaches for accounting for errors in quantitative precipitation forecasts (QPFs) due to uncertainty
in the microphysics (MP) parameterization in a convection-allowing ensemble are examined. They include
mixed MP (MMP), composed mostly of double-moment schemes, and perturbed parameters within the Weather Research and Forecasting (WRF) single-moment 6-class (WSM6) MP scheme (PPMP). Thirty-five cases of real-time storm-scale ensemble forecasts produced by the Center for Analysis and Prediction of Storms during the NOAA Hazardous Weather Testbed 2011 Spring Experiment were examined.
The MMP ensemble had better fractions Brier scores (FBSs) for most lead times and thresholds, but the
PPMP ensemble had better relative operating characteristic (ROC) scores for higher precipitation thresholds.
The pooled ensemble formed by randomly drawing five members from the MMP and PPMP ensembles was
no more skillful than the more accurate of the MMP and PPMP ensembles. Significant positive impact was
found when the two were combined to form a larger ensemble.
The QPF and the systematic behaviors of derived microphysical variables were also examined. The skill
of the QPF among different members depended on the thresholds, verification metrics, and forecast lead
times. The profiles of microphysics variables from the double-moment schemes contained more variation in the
vertical than those from the single-moment members. Among the double-moment schemes, WDM6 produced
the smallest raindrops and very large number concentrations. Among the PPMP members, the behaviors were
found to be consistent with the prescribed intercept parameters. The perturbed intercept parameters used in the
PPMP ensemble fell within the range of values retrieved from the double-moment schemes.
1. Introduction
Numerical weather prediction (NWP) using an ensemble of forecasts has become increasingly popular
since the early 1990s when such centers as the National
Centers for Environmental Prediction in the United
States and the European Centre for Medium-Range
Weather Forecasts began running global ensembles on
Corresponding author address: Jeff Duda, School of Meteorology and Center for Analysis and Prediction of Storms, University of Oklahoma, Rm. 2500, 120 David Boren Blvd., Norman, OK 73072.
E-mail: jeffduda319@gmail.com

DOI: 10.1175/MWR-D-13-00297.1

© 2014 American Meteorological Society
JUNE 2014
DUDA ET AL.
2199
FIG. 1. Model domain (circled 1) and verification domain (circled 2) used in this study.
2. Experimental setup
a. Model description
Version 3.2.1 of the Weather Research and Forecasting Model (WRF), with the Advanced Research
dynamic core (ARW; Skamarock et al. 2008), was used
as the NWP model. WRF-ARW was the primary NWP
model used in the Storm-Scale Ensemble Forecast system (SSEF) conducted by the Center for Analysis and
Prediction of Storms and used in the National Oceanic
and Atmospheric Administration's (NOAA) Hazardous
Weather Testbed (HWT) 2011 Spring Experiment (Kong
et al. 2011). The 2011 Spring Experiment extended from
late April to early June, which included 35 cases [generally
TABLE 1. Description of the 10 SSEF members used in this study. The first six members (in boldface) compose the MMP ensemble, whereas the bottom five members (in italics) compose the PPMP ensemble; the WSM6 (control) member belongs to both.

| Member (perturbation) | No. of classes^a | Double-moment species | N0r (m⁻⁴) | N0g (m⁻⁴) | ρg (kg m⁻³) | αr | αg |
|---|---|---|---|---|---|---|---|
| Thompson | 6 | Rain, ice | Variable | Diagnosed | 400 | 0 | 0 |
| Ferrier+^b | 3 | — | 8 × 10⁶ | — | — | — | — |
| Morrison | 6 | Rain, ice, snow, graupel | Variable | Variable | 400 | 0 | 0 |
| Milbrandt–Yau^c | 7 | All | Variable | Variable | 400 | 2 | 0 |
| WDM6 | 6 | Cloud, rain | Variable | 4 × 10⁶ | 500 | 1 | 0 |
| WSM6 (control) | 6 | — | 8 × 10⁶ | 4 × 10⁶ | 500 | 0 | 0 |
| WSM6 (M1) | 6 | — | 8 × 10⁶ | 4 × 10⁴ | 913 | 0 | 0 |
| WSM6 (M2) | 6 | — | 8 × 10⁷ | 4 × 10⁶ | 500 | 0 | 0 |
| WSM6 (M3) | 6 | — | 8 × 10⁵ | 4 × 10² | 913 | 0 | 0 |
| WSM6 (M4) | 6 | — | 8 × 10⁵ | 4 × 10³ | 913 | 0 | 0 |

^a Water classes (species) include vapor, liquid cloud drops, frozen cloud ice particles, rain, snow, graupel, and hail. Hail was the least common species included.
^b The Ferrier+ scheme treats all condensate species as one.
^c The Milbrandt–Yau scheme adds hail as a separate class with ρh = 900 kg m⁻³ and αh = 3.
b. Verification methodology
Verification was performed over a portion of the model
domain containing the central and southern United
States (Fig. 1). This domain was chosen because the environment in this region during the late spring and early
summer supports organized and intense convection with
tight spatial gradients. Therefore, the verifications in this
region are a representative evaluation of the skill of
convective-scale warm-season QPF. A variety of verification metrics were used, including gridpoint-based,
contingency-table-based, and neighborhood-based metrics. These metrics are introduced as they are encountered.
Probabilistic verification was computed using uncalibrated model output. Therefore, the probability of precipitation (PoP) was computed as the proportion of members in which precipitation exceeded the given threshold.¹

¹ The thresholds tested included 0.254, 2.54, 6.35, 12.7, and 25.4 mm (0.01, 0.10, 0.25, 0.50, and 1.00 in.).
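This uncalibrated PoP is simply, at each grid point, the fraction of members meeting or exceeding the threshold. A minimal sketch in Python (array names, shapes, and values are illustrative, not taken from the SSEF output):

```python
import numpy as np

def ensemble_pop(qpf_members: np.ndarray, threshold_mm: float) -> np.ndarray:
    """Uncalibrated PoP: fraction of ensemble members whose QPF meets or
    exceeds the threshold at each grid point.

    qpf_members: array of shape (n_members, ny, nx), precipitation in mm.
    """
    exceed = qpf_members >= threshold_mm  # boolean (n_members, ny, nx)
    return exceed.mean(axis=0)            # proportion of members, in [0, 1]

# Illustrative 5-member ensemble on a tiny 2 x 2 grid
qpf = np.array([
    [[0.0, 3.0], [1.0, 30.0]],
    [[0.5, 2.0], [0.0, 26.0]],
    [[0.0, 4.0], [0.3, 10.0]],
    [[0.0, 2.6], [0.0, 28.0]],
    [[1.0, 0.1], [0.0, 25.5]],
])
pop = ensemble_pop(qpf, 2.54)  # 0.10-in. threshold
```

At the heavy-rain grid point every member exceeds the threshold, so PoP is 1.0 there; where no member exceeds it, PoP is 0.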
² We use f01 to mean forecast hour 1. This notation is used throughout the remainder of the paper.
FIG. 3. As in Fig. 2, but for frequency biases at the various thresholds indicated.
Regarding equitable threat scores (ETSs), a large decrease in skill from very high values within the first one or two forecast hours was observed (Fig. 4). The higher skill at early forecast hours can be attributed to the initialization of preexisting convection in the initial conditions through the assimilation of radar data (e.g., Xue et al. 2008, 2013), the effects of which wore off quickly. The benefits of assimilating radar data for very short-range convective-scale QPF on the order of a few hours have been observed in previous SSEF forecasts (e.g., Johnson and Wang 2012; Kong et al. 2011; Xue et al. 2008, 2009, 2011, 2013). ETSs were very similar among the members, with a few exceptions; the Thompson and Morrison schemes had higher ETSs than the other members for most forecast hours at the lightest threshold.
FIG. 6. As in Fig. 3, but for the area under the ROC curve.
FIG. 7. (a) Domain- and case-average mean error of 1-h ensemble mean QPF. Diamonds across the bottom indicate forecast
hours at which the difference between the MMP and PPMP ensembles was significant, with the color indicating which ensemble
was superior, whereas filled circles across the top indicate forecast
hours at which the difference between the MMP and pooled ensembles was significant. (b) Domain-average 1-h ensemble mean
QPF and observed QPF (black line).
FIG. 8. Rank histograms for 1-h QPF integrated over all forecast
hours for the (a) MMP, (b) PPMP, and (c) pooled ensembles. The
parameter ext, defined as the ratio of the proportion of values in the
extreme ranks to the proportion of values in the extreme ranks of
a flat histogram, is displayed at the top right of each panel. A rank
histogram of a perfectly dispersive ensemble is indicated by the
dotted black line in each panel.
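The ext parameter defined in the caption of Fig. 8 is straightforward to compute from the binned ranks; a sketch with a synthetic, deliberately underdispersive ensemble (ties between members and observations are ignored):

```python
import numpy as np

def rank_histogram(members: np.ndarray, obs: np.ndarray) -> np.ndarray:
    """Counts of the rank of each observation within the sorted ensemble.
    members: (n_members, n_points); obs: (n_points,). Returns counts for
    the n_members + 1 possible ranks."""
    n_members = members.shape[0]
    ranks = np.sum(members < obs[np.newaxis, :], axis=0)  # members below obs
    return np.bincount(ranks, minlength=n_members + 1)

def ext_ratio(counts: np.ndarray) -> float:
    """Proportion of values in the two extreme ranks divided by the
    proportion expected from a flat (uniform) rank histogram."""
    observed = (counts[0] + counts[-1]) / counts.sum()
    flat = 2.0 / len(counts)
    return float(observed / flat)

# Synthetic 10-member ensemble whose spread is too small relative to truth
rng = np.random.default_rng(1)
members = rng.normal(0.0, 1.0, size=(10, 5000))
obs = rng.normal(0.0, 1.5, size=5000)  # truth more variable than members
counts = rank_histogram(members, obs)
ext = ext_ratio(counts)                # > 1 indicates underdispersion
```

For a perfectly dispersive ensemble ext is near 1; a U-shaped (underdispersive) histogram yields ext > 1.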
FIG. 9. FBSs for 1-h ensemble mean QPF for the various thresholds indicated in the top right of each panel.
Significant differences are indicated as in Fig. 7.
FIG. 10. As in Fig. 9, but for the area under the ROC curve.
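The area under the ROC curve used in Figs. 6 and 10 is obtained by stepping a warning threshold through the possible ensemble probability levels and integrating probability of detection (POD) against probability of false detection (POFD); a sketch with synthetic, roughly calibrated probabilities (the names and data are illustrative):

```python
import numpy as np

def roc_area(pop: np.ndarray, event: np.ndarray, n_members: int) -> float:
    """Area under the ROC curve: warn wherever the ensemble probability
    meets each level k/n_members, accumulate (POFD, POD) pairs, and
    integrate with the trapezoid rule."""
    pod, pofd = [0.0], [0.0]
    for k in range(n_members, -1, -1):       # thresholds 1.0 down to 0.0
        warn = pop >= k / n_members
        hits = np.sum(warn & event)
        misses = np.sum(~warn & event)
        false_alarms = np.sum(warn & ~event)
        corr_negs = np.sum(~warn & ~event)
        pod.append(hits / (hits + misses))
        pofd.append(false_alarms / (false_alarms + corr_negs))
    area = 0.0
    for i in range(1, len(pod)):             # trapezoid rule over POFD
        area += 0.5 * (pod[i] + pod[i - 1]) * (pofd[i] - pofd[i - 1])
    return area

# Synthetic, roughly calibrated 10-member probabilities for a binary event
rng = np.random.default_rng(2)
p_true = rng.uniform(0.0, 1.0, size=20000)
event = rng.uniform(0.0, 1.0, size=20000) < p_true
pop = np.round(p_true * 10) / 10
area = roc_area(pop, event, n_members=10)
```

A no-skill forecast traces the diagonal (area 0.5); skillful probabilistic forecasts give areas approaching 1.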
To further characterize how each approach represents microphysics scheme errors in the ensemble, in addition to evaluating the ensembles using objective verification scores, we examined the systematic behaviors of various microphysical variables from the PPMP and MMP ensemble members. This analysis is meant to answer the following questions: How did the prescribed intercept parameters for rain and graupel in the WSM6 schemes compare to those retrieved from the double-moment microphysics schemes? How did the microphysics parameters differ between single- and double-moment schemes? How did the WDM6 and WSM6 schemes in particular compare?
Using the method discussed in the appendix, various
parameters in these MP schemes were retrieved from the
forecast moment fields. Parameters were not retrieved from the Ferrier+ scheme because of significant differences between that scheme and the others in the treatment of the water species; numerous additional assumptions regarding the rain and graupel PSDs would be required. Because the graupel intercept parameter is diagnosed internally from the mixing ratio in the Thompson scheme, it was not retrieved from that scheme. We chose specifically to examine the mean mass particle diameter, particle surface area, number concentration, mixing ratio,
and intercept parameter for the rain and graupel PSDs.
The first two of these parameters serve as proxies for the tendencies for rain evaporation and cold pool formation
(Dawson et al. 2010), which can have a large impact on the
organization and lifetime of convective systems (Rotunno
et al. 1988; Snook and Xue 2008).
Vertical profiles of horizontally averaged microphysical parameters for rain are shown in Fig. 11, while those
FIG. 11. Vertical profiles of area-average microphysics parameters: (a) mean mass raindrop diameter (mm), (b) raindrop surface area (m²), (c) rain number concentration (m⁻³), (d) normalized rain intercept parameter (m⁻⁴), (e) rain mixing ratio (kg kg⁻¹), and (f) number of grid points at which the rain mixing ratio exceeded 10⁻⁶ kg kg⁻¹.
ensemble prediction system. Ideally, the members of an ensemble should represent random draws from the probability distribution of the truth for a given forecast system, and each member should be equally likely to represent the truth. The designs of the PPMP and MMP ensembles may violate these assumptions. The parameters used in the PPMP
ensemble were not randomly perturbed. The MMP ensemble is composed of different microphysics schemes,
with each scheme having its own bias. In addition, only
the uncertainty in QPF associated with varied microphysics was tested, but uncertainty in QPF from model
error can also come from errors in other physics parameterizations such as the boundary layer, radiation, and
land surface. Since only QPF was examined, it cannot be
determined whether either approach is more skillful at
predicting other meteorological fields such as surface
temperature, relative humidity, wind speed, or geopotential height at other levels in the troposphere. Thus,
future studies should also examine the impacts of varied
microphysics on other fields. The specific conclusions
may also depend on the specific choices of the MP schemes used in the ensemble and on the way the parameters are perturbed. Future studies should investigate ways to use variations in these parameterizations to better design convection-allowing model ensembles, whose use will be increasingly desired for operational weather forecasting.
Acknowledgments. This research was primarily supported by NSF Grants AGS-1046081, AGS-0802888, and
OCI-0905040. The CAPS storm-scale ensemble members
analyzed in this study were also produced under the
support of NOAA's CSTAR program (NA17RJ1227). Computing resources provided through the National Science Foundation XSEDE program at the National Institute for Computational Sciences (NICS) at the University of Tennessee and
those at the OU Supercomputing Center for Education
and Research (OSCER) were used for this study. Ming
Xue was also supported by NSF Grants AGS-0941491
and AGS-1046171. Dan Dawson is acknowledged for
discussions during the initial stages of the work. This
manuscript was improved by the suggestions of two
anonymous reviewers.
APPENDIX
Microphysics Vocabulary
Except for the Ferrier1 scheme, all of the MP schemes
used in this experiment use similar assumptions for the
PSDs of the various classes. They all use the following
model:

n(D) = N_0 D^{\alpha} e^{-\lambda D} , \quad (A1)

where N_0 is the intercept parameter, \alpha the shape parameter, and \lambda the slope parameter. The pth moment of this PSD is then

\int_0^{\infty} D^{p} n(D) \, dD = \frac{N_0 \, \Gamma(\alpha + p + 1)}{\lambda^{\alpha + p + 1}} . \quad (A2)
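As a consistency check, (A2) can be verified numerically for any moment p; a short sketch with illustrative parameter values (not taken from the paper):

```python
import math

import numpy as np

# Illustrative gamma-PSD parameters
N0 = 8.0e6    # intercept parameter (m^-4)
lam = 2.0e3   # slope parameter (m^-1)
alpha = 0.0   # shape parameter
p = 3         # third moment, proportional to mass for spheres

def n_psd(D):
    """Gamma PSD of (A1): n(D) = N0 * D**alpha * exp(-lam * D)."""
    return N0 * D**alpha * np.exp(-lam * D)

# Closed form of the pth moment from (A2)
closed = N0 * math.gamma(alpha + p + 1) / lam ** (alpha + p + 1)

# Trapezoid quadrature over a range capturing nearly all of the PSD
D = np.linspace(0.0, 50.0 / lam, 200_001)
y = n_psd(D) * D**p
numeric = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(D)))
```

For these values the closed form gives N0·Γ(4)/λ⁴ = 3 × 10⁻⁶, and the quadrature agrees to high precision.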
The mean mass diameter is obtained from the mixing ratio q and total number concentration N_t as

D = \left( \frac{\rho_a q}{c_x N_t} \right)^{1/3} , \quad (A5)

where \rho_a is the air density and c_x relates particle mass to the cube of diameter (c_x = \pi \rho_x / 6 for spheres of bulk density \rho_x).
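The retrieval in (A5) is a one-line computation; the sketch below assumes spherical particles, so that c_x = πρ_x/6 (the function name and example values are ours):

```python
import math

def mean_mass_diameter(q: float, nt: float, rho_air: float = 1.0,
                       rho_particle: float = 1000.0) -> float:
    """Mean mass diameter (m) per (A5): D = (rho_air*q / (c_x*nt))**(1/3),
    with c_x = (pi/6)*rho_particle for spherical particles.

    q: mixing ratio (kg/kg); nt: total number concentration (m^-3)."""
    cx = math.pi / 6.0 * rho_particle
    return (rho_air * q / (cx * nt)) ** (1.0 / 3.0)

# Illustrative: 1 g/kg of rain distributed over 1000 drops per cubic meter
d = mean_mass_diameter(q=1.0e-3, nt=1.0e3)  # roughly 1.24 mm
```

Holding q fixed, a larger N_t shrinks the mean mass diameter, which is how the double-moment schemes distinguish many small drops from few large ones.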
REFERENCES
Baldwin, M. E., S. Lakshmivarahan, and J. S. Kain, 2001: Verification of mesoscale features in NWP models. Preprints, Ninth
Conf. on Mesoscale Processes, Ft. Lauderdale, FL, Amer.
Meteor. Soc., 8.3. [Available online at https://ams.confex.com/
ams/pdfpapers/23364.pdf.]
Berner, J., G. Shutts, M. Leutbecher, and T. Palmer, 2009: A
spectral stochastic kinetic energy backscatter scheme and its
impact on flow-dependent predictability in the ECMWF
ensemble prediction system. J. Atmos. Sci., 66, 603–626,
doi:10.1175/2008JAS2677.1.
——, S.-Y. Ha, J. P. Hacker, A. Fournier, and C. Snyder, 2011:
Model uncertainty in a mesoscale ensemble prediction system:
Stochastic versus multiphysics representations. Mon. Wea.
Rev., 139, 1972–1995, doi:10.1175/2010MWR3595.1.
Buizza, R., and T. N. Palmer, 1995: The singular-vector structure of
the atmospheric global circulation. J. Atmos. Sci., 52, 1434–1456, doi:10.1175/1520-0469(1995)052<1434:TSVSOT>2.0.CO;2.
——, M. Miller, and T. N. Palmer, 1999: Stochastic representation of model uncertainties in the ECMWF ensemble prediction system. Quart. J. Roy. Meteor. Soc., 125B, 2887–2908, doi:10.1002/qj.49712556006.
Candille, G., 2009: The multiensemble approach: The NAEFS
example. Mon. Wea. Rev., 137, 1655–1665, doi:10.1175/2008MWR2682.1.
Caron, J.-F., 2013: Mismatching perturbations at the lateral boundaries in limited-area ensemble forecasting: A case study. Mon.
Wea. Rev., 141, 356–374, doi:10.1175/MWR-D-12-00051.1.
Charron, M., G. Pellerin, L. Spacek, P. L. Houtekamer, N. Gagnon,
H. L. Mitchell, and L. Michelin, 2010: Toward random sampling
of model error in the Canadian ensemble prediction system.
Mon. Wea. Rev., 138, 1877–1901, doi:10.1175/2009MWR3187.1.
Clark, A. J., W. A. Gallus Jr., and T. C. Chen, 2008: Contributions of
mixed physics versus perturbed initial/lateral boundary conditions to ensemble-based precipitation forecast skill. Mon. Wea.
Rev., 136, 2140–2156, doi:10.1175/2007MWR2029.1.
——, ——, M. Xue, and F. Kong, 2009: A comparison of precipitation forecast skill between small convection-allowing and large convection-parameterizing ensembles. Wea. Forecasting, 24, 1121–1140, doi:10.1175/2009WAF2222222.1.
——, W. A. Gallus, M. Xue, and F. Kong, 2010: Growth of spread in convection-allowing and convection-parameterizing ensembles. Wea. Forecasting, 25, 594–612, doi:10.1175/2009WAF2222318.1.
Yussouf, N., and D. J. Stensrud, 2012: Comparison of single-parameter and multiparameter ensembles for assimilation of radar observations using the ensemble Kalman filter. Mon. Wea. Rev., 140, 562–586, doi:10.1175/MWR-D-10-05074.1.
Zhang, J., and Coauthors, 2011: National Mosaic and Multi-sensor
QPE (NMQ) system: Description, results, and future plans.
Bull. Amer. Meteor. Soc., 92, 1321–1338, doi:10.1175/2011BAMS-D-11-00047.1.