

IEEE Transactions on Power Systems, Vol. 11, No. 3, August 1996

DISTRIBUTED NATURE OF RESIDENTIAL CUSTOMER OUTAGE COSTS


R. Ghajar
Power Math Associates, Inc., 12625 High Bluff Drive, Suite 103, San Diego, CA 92130-2380

R. Billinton        E. Chan
Power Systems Research Group, University of Saskatchewan, Saskatoon, Saskatchewan

Abstract - Reliability worth assessment is an important factor in power system planning and operation. An equally important issue is how to use customer costs of electric supply interruptions as surrogates to appropriately quantify reliability worth. Postal or in-person surveys of electric customers are often used to determine interruption costs. The results obtained from the surveys are transformed into customer damage functions which are applicable to individual customer classes and sectors. Standard customer damage functions use aggregate or average customer costs for selected outage durations. This paper develops a practical alternative to the customer damage function method of describing the interruption cost data. The alternative technique, which is designated as the probability distribution approach, is capable of recognizing the dispersed nature of the data. The proposed probability distribution method is illustrated in this paper using the interruption cost data collected in a 1991 survey of the Canadian residential sector.

I. INTRODUCTION
A modern power system has the primary function of satisfying the system load and energy requirements at the lowest possible cost and with a reasonable assurance of continuity and quality of supply. Customer expectations regarding quality of service are continually increasing due to the high degree of dependence placed on electrical energy in today's social and working environment. In addition to quality of service, consumers also expect to receive electricity at the lowest possible cost. In order to balance the economic and reliability concerns, utilities incorporate both reliability criteria and cost considerations in their decision making processes. Conceptually, performing a reliability cost/reliability worth analysis requires an assessment of the costs of providing reliable service and a quantification of the worth of having it. The ability to assess the level of reliability within the system and the costs associated with it is well established [1]. Procedures for conducting worth assessment are beginning to mature, but it is still too early to say that there is consensus on appropriate methodologies.

Assessing the worth of service reliability is a difficult and subjective task. Most of the evaluation methods are based on the assessment of the effects and impacts of unreliability [2-4]. The results from such assessments are not equal to the worth of reliability, but are merely surrogates of it. The customer survey approach to calculating reliability worth appears to be the method favoured by most electric utilities. It is based on the premise that the customer is in the best position to assess the monetary losses associated with a power failure. The assessment of reliability worth for a particular customer sector using the survey approach is based on the usable outage cost data gathered from the respondents in that sector. Utilization in a practical context involves converting the gathered data into a functional representation or a cost model for the given sector. The traditional cost model is known as a Customer Damage Function (CDF). This model defines the aggregate or average cost of interruption for a specified class of customers as a function of duration in a given studied area. An important question arising in this approach is how well the aggregate or average values represent the entire response. Interruption cost analyses conducted at the University of Saskatchewan show that the monetary values exhibit large variations and, in some cases, the standard deviation is more than three times the mean value. Relatively little consideration has been given to the variation of cost values about their means or expected values. The dispersion of the customer interruption cost data is important information that should be incorporated into the appraisal of electric service reliability worth.

This paper introduces a new approach, designated as the probability distribution method, to customer cost evaluation. This method recognizes the dispersed nature of the interruption cost data and provides a new dimension in the realistic and effective assessment of the losses incurred by electric users due to power failures.
95 SM 510-8 PWRS. A paper recommended and approved by the IEEE Power System Engineering Committee of the IEEE Power Engineering Society for presentation at the 1995 IEEE/PES Summer Meeting, July 23-27, 1995, Portland, OR. Manuscript submitted December 5, 1994; made available for printing July 6, 1995.



II. COST OF INTERRUPTION DATA


Over the last thirteen years, the Power Systems Research Group at the University of Saskatchewan (the Group) has conducted several customer surveys to determine the impacts of power interruptions on Canadian electrical users. The Group conducted its first extensive study of interruption costs in the residential [5], commercial and small industrial [6] sectors in 1981. The second survey, which covered the agricultural sector, was conducted in 1985 [7]. A further survey of the residential, commercial and small industrial sectors was conducted in 1991 to verify and augment the available data and to understand the changes, if any, that had occurred in customer perceptions of the costs over the previous ten-year period [8]. For example, did inflation, recession, changes in the cost of living, etc. affect the costs? Currently, the Group is engaged in surveying other customer sectors including office buildings, governments and institutions.

The interruption cost data illustrated in this paper are based on the usable responses from the latest residential survey conducted by the Group. Respondents were asked to provide interruption cost estimates based on a series of preparatory actions associated with failures of specified durations. The individual costs supplied by the respondents often exhibit large variations due to the great diversity in the energy requirements of the respondents. Dollar or absolute values are therefore normalized by the corresponding energy requirements in order to reduce the cost variations between respondents [8]. The energy requirements of customers can be determined from their annual consumption (kWh) or annual peak demand (kW). Customers in the residential sector are not typically monitored for peak demand and consequently a load factor of 0.23, based on research conducted by Manitoba Hydro, was assumed in order to convert the energy requirements into peak load demand. The peak load normalized values in $/kW are of primary importance in studies pertaining to system cost and benefit as they can be used in subsequent calculations leading to the overall cost of unserved energy [9]. The following sections describe and illustrate the conventional customer damage function method and the probability distribution approach to representing the residential interruption cost data from the 1991 survey.

III. CONVENTIONAL CUSTOMER DAMAGE FUNCTION METHOD

Interruption cost data collected from electrical customers are duration specific, as customers are asked to provide their best estimates of monetary losses under several different outage scenarios. Studies of this type provide data which can be conveniently used to create Customer Damage Functions (CDFs) for specific customer classes and sectors [10]. The basic idea in the CDF approach is to model the outage cost as a function of interruption duration. The average and aggregate outage cost for a particular duration can be calculated using (1)-(3). These equations can be easily applied to residential sector data but have to be modified for other customer sectors in order to recognize the various subgroups within each sector [8,10].

Average cost per interruption = (1/k) · Σ_{i=1}^{k} cost_i     ($/interruption)     (1)

Aggregated consumption-normalized cost = Σ_{i=1}^{m} cost_i / Σ_{i=1}^{m} cons_i     ($/kWh)     (2)

Aggregated peak-normalized cost = Σ_{i=1}^{m} cost_i / Σ_{i=1}^{m} peak_i     ($/kW)     (3)

The following variables are defined for each respondent i: cost_i is the cost estimate in dollars, cons_i is the annual consumption in kWh and peak_i is the annual peak demand in kW; k is the number of usable cost estimates and m is the number of respondents for which both usable cost estimates and energy consumption values are available. The energy consumption values were used to calculate the peak-normalized costs using a load factor of 0.23. Equation (1) is a simple calculation of the average dollar per interruption cost in the studied sector. The two normalized costs represent aggregated averages as opposed to the simple expected value. Calculation of the aggregated averages using (2) and (3) is performed by summing the dollar costs for the respondents in the sector and dividing this total by the total energy consumption or peak demand associated with the sector. The aggregating process reduces the effects of respondents that have relatively low consumptions and high interruption cost estimates. The aggregated cost calculations include only those respondents for whom both dollar cost estimates and annual energy consumption values are available.

Customer interruption costs gathered during the 1991 survey are used to illustrate this process. The average, aggregated consumption-normalized and aggregated peak-normalized costs were calculated for each surveyed duration in the residential sector and are given in Table I. Each cost value and its respective interruption duration given in Table I can be considered as a pair of observations, where the duration is the independent variable and the cost is the dependent variable. The sample sizes used to calculate each average and aggregate cost are also shown in Table I. It can be seen that the sample sizes used in the aggregation process are smaller than those used in the simple averaging process. This is due to the lack of information on energy consumption for some respondents.
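The calculations in (1)-(3) are straightforward to script. The sketch below is illustrative rather than the authors' actual program; the record layout, the field names and the use of an 8760-hour year with the 0.23 load factor to infer peak demand from annual consumption are assumptions.

```python
# Illustrative sketch of (1)-(3) for one interruption duration.
# Assumed record layout: each respondent has a dollar cost estimate and,
# where available, an annual energy consumption in kWh.  Peak demand is
# inferred from annual consumption using the assumed 0.23 load factor.

LOAD_FACTOR = 0.23       # assumed, as cited in the paper (Manitoba Hydro based)
HOURS_PER_YEAR = 8760.0

def cost_indices(records):
    """Return (average $/interruption, aggregated $/kWh, aggregated $/kW)."""
    costs = [r["cost"] for r in records if r.get("cost") is not None]
    paired = [r for r in records
              if r.get("cost") is not None and r.get("cons") is not None]

    avg_cost = sum(costs) / len(costs)                               # eq. (1)
    aggr_kwh = (sum(r["cost"] for r in paired)
                / sum(r["cons"] for r in paired))                    # eq. (2)
    peaks = [r["cons"] / (HOURS_PER_YEAR * LOAD_FACTOR) for r in paired]
    aggr_kw = sum(r["cost"] for r in paired) / sum(peaks)            # eq. (3)
    return avg_cost, aggr_kwh, aggr_kw

# Hypothetical respondents for a single outage scenario:
sample = [{"cost": 14.0, "cons": 9000.0},
          {"cost": 30.0, "cons": 12000.0},
          {"cost": 5.0}]            # no consumption value: used in (1) only
print(cost_indices(sample))
```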
TABLE I
Average, aggregated consumption-normalized and aggregated peak-normalized interruption costs for the residential sector.

Duration | Average cost ($/int) | Sample size | Aggr. consumption-normalized cost ($/kWh) | Sample size | Aggr. peak-normalized cost ($/kW) | Sample size
20 min | 0.22 | 1751 | 0.000014 | 1351 | 0.0278 | 1351
1 hr | 1.27 | 1753 | 0.000081 | 1345 | 0.1626 | 1345
4 hrs | 14.02 | 1746 | 0.000900 | 1343 | 1.8126 | 1343
8 hrs | 29.83 | 1720 | 0.001986 | 1328 | 4.0006 | 1328
1 day | 135.22 | 1701 | 0.009058 | 1313 | 18.2491 | 1313

The aggregated peak-normalized costs ($/kW) given in Table I are plotted in Fig. 1 to visually show the customer damage function for the residential sector. In this figure, both axes have been transformed to a logarithmic scale in order to cover the wide range of values. The CDF shown in this figure exhibits a piece-wise linearly increasing relationship in which the segment between any two successive studied interruption durations is described by a straight line. The interruption cost C_i corresponding to an intermediate outage duration d_i which was not surveyed can be determined by linear interpolation as shown in Fig. 1.

Fig. 1. Customer damage function of the Canadian residential sector.

IV. DISTRIBUTED NATURE OF CUSTOMER OUTAGE COSTS

A conventional customer damage function for a given sector is easy to develop and use, but it does not portray the dispersed nature of the interruption cost data. The outage costs represented by this function are essentially average or aggregate values, where each value provides a measure of the central tendency of a set of data for a particular interruption duration. These values do not provide any indication of the spread among the data (e.g. range, standard deviation) or its skewness, i.e. deviation from a symmetrical distribution. There is, therefore, a need for a model that can represent the full range of interruption cost data in a concise manner [11].

Before proposing a new interruption cost model, special attention must be paid to the conditions under which the data are provided. These conditions or factors can affect the magnitude of an outage cost estimate. In the customer damage function method, aggregate costs were estimated for each interruption duration using (2) or (3). Utilization of these equations results in minimizing the effects of respondents with high outage costs and low consumption. Such effects are important to the distribution analysis and must be included in the calculation of the outage costs for each respondent i, as given by the following equations. The variables cost_i, peak_i, cons_i, k and m have the same definitions as in (1)-(3).

Dollar cost per interruption = cost_i,  i = 1, ..., k     ($/int)     (4)

Consumption-normalized cost = cost_i / cons_i,  i = 1, ..., m     ($/kWh)     (5)

Peak-normalized cost = cost_i / peak_i,  i = 1, ..., m     ($/kW)     (6)
In order to illustrate the dispersion of interruption cost data for the residential sector, a number of basic statistics were derived from the peak-normalized costs given by (6). These statistics are summarized in Table II. The fact that the $/kW values display large variations, with standard deviations which in most cases are over three times the mean value, emphasizes the need for some indication of how representative the mean values are. For example, consider the statistics associated with the 20-minute interruption scenario given in Table II. Although the average cost is low (0.0560 $/kW), some users have significantly larger losses (9.7 $/kW). The positive skewness value associated with each interruption duration implies that the distributions have long upper tails: a relatively small number of respondents report costs far above the mean value, while the majority report costs below it.
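The Table II statistics follow directly from the per-respondent peak-normalized costs of (6). A minimal sketch, reusing the hypothetical record layout from the earlier example and assuming scipy is acceptable for the skewness measure:

```python
import numpy as np
from scipy import stats

LOAD_FACTOR = 0.23
HOURS_PER_YEAR = 8760.0

def peak_normalized_costs(records):
    """Per-respondent $/kW costs as in (6); peak demand inferred from annual kWh."""
    values = []
    for r in records:
        if r.get("cost") is not None and r.get("cons"):
            peak_kw = r["cons"] / (HOURS_PER_YEAR * LOAD_FACTOR)
            values.append(r["cost"] / peak_kw)
    return np.asarray(values)

def basic_statistics(x):
    """Mean, standard deviation, range and skewness, as reported in Table II."""
    return {"mean": x.mean(),
            "std": x.std(ddof=1),
            "min": x.min(),
            "max": x.max(),
            "skewness": stats.skew(x, bias=False),
            "n": x.size}
```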


TABLE II
Basic statistics of the peak-normalized ($/kW) interruption costs for the residential sector.

Duration | Mean | Std. deviation | Minimum | Maximum | Skewness | Sample size
20 min | 0.0560 | 0.4046 | 0.0 | 9.7 | 20.4313 | 1290
1 hr | 0.3167 | 1.4051 | 0.0 | 29.1 | 15.0871 | 1285
4 hrs | 3.1059 | 7.8015 | 0.0 | 130.7 | 10.9324 | 1283
8 hrs | 8.3328 | 42.9326 | 0.0 | 1132.6 | 20.2936 | 1268
1 day | 35.6844 | 181.8560 | 0.0 | 5897.0 | 27.9781 | 1255

In the preceding discussion, the mean values of the outage costs (column 2 of Table II) are different from the corresponding aggregated values used in the conventional customer damage function method (column 6 of Table I). The two sets of cost data are compared in Fig. 2 together with the histograms of the peak-normalized outage costs. It can be seen from this figure that the aggregated values are lower than the corresponding mean values for each interruption duration. This observation is to be expected, as the aggregation process is designed to minimize the effects of outliers and therefore yields a value below the mean. It should be noted that part of the difference between the mean and aggregate values shown in Fig. 2 may be due to the difference in the corresponding sample sizes.
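A small hypothetical two-respondent example illustrates why the aggregate of (3) sits below the mean of the per-respondent values of (6); the numbers are invented for illustration only.

```python
# Two hypothetical respondents: a high cost with a small peak demand, and a
# modest cost with a large peak demand.
costs = [10.0, 5.0]   # $ per interruption
peaks = [1.0, 10.0]   # kW

mean_of_ratios = sum(c / p for c, p in zip(costs, peaks)) / len(costs)  # as in (6)
aggregate = sum(costs) / sum(peaks)                                     # as in (3)
print(mean_of_ratios, aggregate)   # 5.25 $/kW versus about 1.36 $/kW
```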

The clustering of bars in each histogram shown in Fig. 2 is the result of using a logarithmic scale for the interruption cost axis. The frequency axis of each histogram contains a break in order to illustrate the high value of the first bar and the low values of the remaining bars. The shape of the histograms illustrates the highly skewed nature of the outage cost estimates and the wide range of values that are possible for each interruption duration. This suggests that the aggregate values represent conservative estimates of the true reliability worth for a given customer group. It is evident, therefore, that there is a need to develop a new approach to describing these costs [11].

V. PROBABILITY DISTRIBUTION APPROACH


Utilization of the actual interruption cost data in reliability worth assessment is possible if these data are available for all possible interruption scenarios. Unfortunately, a survey questionnaire can only include a very limited number of interruption scenarios and therefore cost estimates are not readily available for every possible case. In order to establish a cost model at an intermediate or non-questioned duration, the analysis has to use results obtained from adjacent questioned durations. The problem then becomes one of how to infer intermediate costs using statistics calculated from the known durations.

Finding the outage cost at an intermediate duration is simple and straightforward when a conventional customer damage function is used to describe the interruption costs. As outlined earlier in this paper, linear interpolation between two calculated aggregate values from the adjacent studied durations is used in most cases. This task is not as straightforward if the data at each studied duration have a different distribution (Fig. 2). Direct interpolation is not applicable in this case, as the correlation between the various distributions is extremely difficult to define. The actual outage costs at the durations that were surveyed must be represented by a common set of distributions that allow interpolations to be made.

Unless a formal examination is conducted, it is difficult, if not impossible, to identify the distribution model which best fits a given set of cost data. Rather than arbitrarily selecting a probability distribution and examining its appropriateness to describe the data, a more systematic procedure should be used to conduct the analysis. The idea is to transform the cost values such that they can be represented by the desired distribution [11,12]. The normal distribution was selected as the desired model for the residential cost data because of its popularity and simplicity. Other advantages of the normal model are:

Fig. 2. Comparison between the aggregated and mean peak-normalized cost data for the residential sector.

1. There are more tools available to test the goodness-of-fit for normal distributions than for any other distribution. The best known goodness-of-fit technique for testing normality is the moment test [13].

2. The normal distribution is an essential requirement for inferential statistics [14]. This means that the distributions at intermediate durations can be derived from those directly surveyed, and the interpolated distribution will also be normal. This property cannot be assured when other distribution models are used.

It is believed that finding the relationship between various normal parameters is a more realistic task than attempting to determine the correlation between various distinct models. The remainder of this paper is concerned with the application of the normality transformation to the 1991 residential customer interruption cost data and the procedures needed to draw inferences regarding the distributions at intermediate durations.

V.1. Normality Transformation


The objective of this transformation is to convert the outage cost data for each interruption duration to a normal distribution that can be used in power system reliability worth assessment. The major assumptions in this analysis are that the customer survey sampling procedure provides truly random interruption cost data and that the customer cost variations can be described by continuous probability distributions. The following transformation, belonging to a family of power transformations, was selected for the work presented in this paper [12]:

y = (x^λ − 1)/λ   if λ ≠ 0
y = log(x)        if λ = 0                                    (7)

where x refers to the original value, λ is the power exponent and y is the transformed value. There are two limitations to this family of equations: it applies only to continuous variables and it does not apply to zero-valued data. In order to satisfy these constraints, zero-valued customer outage cost observations were extracted from the duration specific data and treated separately. The remaining data were analyzed for normality using an iterative process designed to determine the value of λ which best transforms a set of cost data into a normal distribution.

A perfect normal distribution is characterized by zero skewness and zero excess kurtosis, i.e. the third standardized moment is zero and the fourth standardized moment equals that of the normal model. The third moment characterizes the skewness of a distribution, i.e. deviation from symmetry, while the fourth moment gives the kurtosis or tail-thickness [14]. When a group of transformed cost data is examined for normality, it is difficult to consider both elements simultaneously. Therefore, it was decided to first focus on the skewness value. The closer the skewness value is to zero, the more symmetrical is the distribution. The measure of skewness will typically never be exactly zero for a given set of sample data, but will fluctuate about zero because of sampling variations or an imperfect random process. The approach used to find the best symmetrical transformation involves an iterative search for the value of λ which gives a minimum value of the skewness. The last value of λ generated in the process gives the best symmetrical transformation. This transformation does not necessarily produce an appropriate normal distribution, because such distributions must also satisfy the kurtosis condition.

The above iterative process generates a set of approximately symmetrical distributions for each interruption duration using different values of λ. In order to select the most adequate set of normally distributed data from these distributions, each transformation must be checked against the goodness-of-fit tests of normality [13] using the kurtosis together with the skewness criterion. Selection of the best value of λ was performed using a hypothesis testing procedure [14] which consists of two major steps. The first step involves formulating a hypothesis regarding the normality of the transformed cost data for a particular value of λ, while the second consists of determining test criteria designed to assess the validity of the hypothesis. Acceptance or rejection of the hypothesis is based on the values of the skewness and kurtosis of the transformed data. The hypothesis testing procedure described above was incorporated in the iterative process in such a way that only those transformations which satisfy the hypothesis test are retained. If more than one set of iteration results is stored, the group with the smallest skewness value is selected and the associated value of λ is the best normality transformation factor.

The application of the above approach to the 1991 residential cost data led to the conversion of the highly skewed histograms shown in Fig. 2 to normal probability distributions with the characteristics (mean and variance) given in Table III. These distributions can be used more readily to interpolate the distributions at intermediate durations, as given in the following section. Although zero-valued data were not used to build the distribution model, these data represent a special group of respondents who believe that power failures have absolutely no monetary impact on their functions and activities. The "0" data, if neglected completely, will cause an over-estimation of the reliability worth. Therefore, the quantity of "0" data must be known and retained so that it can be used at a later stage in the analysis. The frequency of "0" data is expressed in terms of the proportion of these data in the total number of usable responses. This proportion is denoted as P_z and is also given in Table III.
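The transformation and search procedure of Section V.1 can be sketched as follows. This is an illustrative reimplementation, not the authors' program: the grid search over λ, the use of scipy's D'Agostino-Pearson normality test in place of the paper's skewness/kurtosis hypothesis test, and the base-10 logarithm for the λ = 0 case are assumptions.

```python
import numpy as np
from scipy import stats

def power_transform(x, lam):
    """Power transformation of (7): (x**lam - 1)/lam, or log10(x) when lam = 0."""
    x = np.asarray(x, dtype=float)
    return np.log10(x) if lam == 0.0 else (x**lam - 1.0) / lam

def inverse_transform(y, lam):
    """Inverse transformation of (12): transformed cost back to actual cost."""
    return 10.0**y if lam == 0.0 else (1.0 + lam * y)**(1.0 / lam)

def best_lambda(costs, lam_grid=np.linspace(-2.0, 2.0, 401), alpha=0.05):
    """Grid search for the lambda whose transformed data are most symmetrical
    among those that pass a skewness/kurtosis based normality test."""
    nonzero = np.asarray([c for c in costs if c > 0.0])  # zero costs set aside as Pz
    accepted = []
    for lam in lam_grid:
        y = power_transform(nonzero, lam)
        _, p_value = stats.normaltest(y)   # D'Agostino-Pearson omnibus test
        if p_value > alpha:
            accepted.append((abs(stats.skew(y)), lam))
    return min(accepted)[1] if accepted else None
```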
TABLE III
Transformation factor, distribution parameters and probability of zero costs corresponding to each interruption duration of the residential sector.

Duration | λ | μ | σ² | P_z
20 min | ... | ... | ... | ...
1 hr | ... | ... | ... | ...
4 hrs | ... | 0.2886 | 1.6551 | 0.0265
8 hrs | -0.0105 | 1.1345 | 1.5725 | 0.0426
1 day | -0.0238 | 1.8289 | 1.7337 | 0.0151

V.2. Distributions At Intermediate Durations


The distributed nature of the interruption cost data at a specific outage duration is characterized by the following parameters:

1. the normality power transformation factor, λ,
2. the mean of the normal-transformed distribution, μ,
3. the variance of the normal-transformed distribution, σ², and
4. the proportion of zero-valued data, P_z.

This set of parameters is unique for each studied interruption duration, as shown in Table III. The values of λ, μ and σ² for any surveyed duration can be obtained through the normality transformation process presented in the previous section, while the value of P_z can be easily assessed from the original data. These parameters cannot be obtained in the same manner for non-studied durations because cost data are not collected at these points. Regression analysis is used to predict the distribution parameters at intermediate durations using known values at the studied durations. In this approach, equations describing the relationship between the studied duration and each of the four parameters are obtained using the least-squares method [15]. Given the best fitting equation for each of the four relationships, a particular parameter at a non-studied duration can be predicted by substituting the duration value into the respective equation. Once the parameters are known, the intermediate duration distribution can be easily developed.

The simplest way to portray the possible models which fit the relationships given in Table III is by using scatter diagrams, as shown in Fig. 3. A logarithmic x axis is used in each of the scatter diagrams in order to reduce the large variability in the interruption duration and therefore result in a more closely fitting equation. The regression procedure started with the straight-line model and then proceeded to higher order polynomials until an R² of 90% or more was achieved. The resulting regression curve for each parameter and the corresponding value of R² are superimposed on each graph. The equation of each curve as a function of the interruption duration d in minutes is also shown.
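A minimal sketch of the least-squares step described above, assuming numpy's polynomial fitting; the durations are those surveyed, while the parameter values to be fitted would come from Table III and are not reproduced here.

```python
import numpy as np

def fit_parameter(durations_min, values, max_degree=3, r2_target=0.90):
    """Fit increasing-order polynomials in log10(duration) until R^2 >= 90%."""
    x = np.log10(np.asarray(durations_min, dtype=float))
    y = np.asarray(values, dtype=float)
    for degree in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, degree)
        residuals = y - np.polyval(coeffs, x)
        r2 = 1.0 - residuals.dot(residuals) / ((y - y.mean()) ** 2).sum()
        if r2 >= r2_target:
            break
    return coeffs, r2

# Usage at an intermediate duration, e.g. 90 minutes (mu_values is a placeholder
# for the five fitted means of Table III):
# mu_coeffs, _ = fit_parameter([20, 60, 240, 480, 1440], mu_values)
# mu_90 = np.polyval(mu_coeffs, np.log10(90.0))
```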

Fig. 3. Regression results: residential cost distribution parameters as a function of interruption duration. The fitted equations, with d in minutes, are λ = -0.4101 + 0.1455·log(d), μ = -11.125 + 4.5563·log(d), σ² = 13.749 − 9.0039·log(d) + 1.6507·[log(d)]² and P_z = 0.8721·[log(d)]^(−3.7122).

In order to illustrate the fitted curves, the parameters of the outage cost distribution corresponding to an interruption duration of 1.5 hours are calculated by substituting the value of d in minutes (d = 90) into each equation, as shown below:

λ = -0.4101 + 0.1455·log(90) = -0.1258
μ = -11.125 + 4.5563·log(90) = -2.2209
σ² = 13.749 − 9.0039·log(90) + 1.6507·[log(90)]² = 2.4573
P_z = 0.8721·[log(90)]^(−3.7122) = 0.0725

The value of P_z indicates that 7.25% of residential customers have no costs associated with interruptions lasting 1.5 hours. The remaining 92.75% have outage costs that, when transformed, can be described by a normal distribution having a mean of -2.2209 and a variance of 2.4573. The actual outage cost corresponding to the mean value of this distribution can be calculated using the following inverse transformation:
x = (1 + λ·y)^(1/λ)   if λ ≠ 0
x = log⁻¹(y)          if λ = 0                                    (12)

where y is the sampled transformed cost, x is the corresponding actual cost and λ = -0.1258 is the power transformation factor. The result obtained using (12) is 0.1309 $/kW. By contrast, the corresponding value calculated from the aggregated peak-normalized outage costs shown in Table I and Fig. 1 is 0.3292 $/kW. It is clear from this comparison that there is a large difference between the two values. Although the mean value is smaller than the aggregated one, it can be ascertained that the utilization of the whole distribution in power system planning and operation studies will yield results that correspond to a higher overall outage cost [11].
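For reference, the 0.3292 $/kW figure quoted above follows from linear interpolation on the log-log scale of Fig. 1 between the 1-hour and 4-hour entries of Table I; a short check (durations in minutes):

```python
import math

# Log-log interpolation of the aggregated peak-normalized CDF of Table I
# between 1 hr (0.1626 $/kW) and 4 hrs (1.8126 $/kW), evaluated at 90 minutes.
def cdf_interpolate(d, d1, c1, d2, c2):
    frac = (math.log10(d) - math.log10(d1)) / (math.log10(d2) - math.log10(d1))
    return 10.0 ** (math.log10(c1) + frac * (math.log10(c2) - math.log10(c1)))

print(cdf_interpolate(90.0, 60.0, 0.1626, 240.0, 1.8126))   # about 0.329 $/kW
```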


VI. CONCLUSIONS
This paper illustrates two conceptually different techniques for modelling customer interruption costs. The conventional customer damage function approach uses aggregate values to define the overall monetary losses incurred by electrical users due to power failures. This method is relatively easy to develop and use to assess reliability worth. The basic customer damage function, however, cannot reflect the dispersed nature of interruption costs and therefore provides a limited interpretation of the entire customer outage cost data base. The interruption cost estimates for a given interruption duration display a significant degree of variability, which indicates that there is a need for a better representation of these costs.

This paper presents a new approach, designated as the probability distribution technique, which is capable of recognizing the dispersed nature of the interruption cost data. Using this approach, the cost data are transformed into normal distributions which can be used to interpolate for the distributions of outage costs at intermediate durations. An inverse transformation procedure for converting the transformed costs back to their true values, which can then be used in a variety of reliability worth studies, is also described in this paper.



It is believed that the probability distribution modelling technique proposed in this paper gives a better representation of the monetary losses incurred by customers due to power failures than the conventional customer damage function approach. This technique, when utilized in a reliability worth assessment study, should provide a realistic and effective assessment of the losses incurred by electrical users due to power failures. The techniques described in this paper are illustrated using the 1991 residential cost of interruption data.

VII. REFERENCES

1. R.N. Allan, R. Billinton, A.M. Breipohl and C.H. Grigg, 'Bibliography on the application of probability methods in power system reliability evaluation: 1987-1991', IEEE Transactions on Power Systems, Vol 9, No 1, February 1994, pp 41-49.
2. A.P. Sanghvi, 'Economic costs of electricity supply interruptions: US and foreign experience', Energy Economics, Vol 4, No 3, July 1982, pp 180-198.
3. R. Billinton, G. Wacker and E. Wojczynski, 'Comprehensive bibliography on electrical service interruption costs: 1980-1990', IEEE Transactions on Power Apparatus and Systems, Vol 102, No 6, June 1983, pp 1831-1837.
4. G. Tollefson, R. Billinton and G. Wacker, 'Comprehensive bibliography on reliability worth and electric service customer interruption costs: 1980-1990', IEEE Transactions on Power Systems, Vol 6, No 4, November 1991, pp 1508-1514.
5. G. Wacker, E. Wojczynski and R. Billinton, 'Interruption cost methodology and results - A Canadian residential survey', IEEE Transactions on Power Apparatus and Systems, Vol 102, No 10, October 1983, pp 3385-3392.
6. E. Wojczynski, R. Billinton and G. Wacker, 'Interruption cost methodology and results - A Canadian commercial and small industry survey', IEEE Transactions on Power Apparatus and Systems, Vol 103, No 2, February 1984, pp 437-444.
7. G. Wacker and R. Billinton, 'Farm losses resulting from electric service interruptions - A Canadian survey', IEEE Transactions on Power Systems, Vol 4, No 2, May 1989, pp 472-478.
8. G. Tollefson, R. Billinton, G. Wacker, E. Chan and J. Aweya, 'A Canadian customer survey to assess power system reliability worth', IEEE Transactions on Power Systems, Vol 9, No 1, February 1994, pp 443-450.
9. R. Billinton, J. Oteng-Adjei and R. Ghajar, 'Comparison of two alternate methods to establish an interrupted energy assessment rate', IEEE Transactions on Power Systems, Vol 2, No 3, August 1987, pp 751-757.
10. R. Billinton, G. Wacker and R. Subramaniam, 'Factors affecting the development of a residential customer damage function', IEEE Transactions on Power Systems, Vol 2, No 1, February 1987, pp 204-209.
11. R. Billinton, E. Chan and G. Wacker, 'Probability distribution approach to describe customer costs due to electric supply interruptions', IEE Proceedings - Generation, Transmission and Distribution, Vol 141, No 6, November 1994, pp 594-598.
12. G.E.P. Box and D.R. Cox, 'An analysis of transformations', Journal of the Royal Statistical Society, Series B, Vol 26, 1964, pp 211-252.
13. R.B. D'Agostino and M.A. Stephens, Goodness-of-fit Techniques, Marcel Dekker Inc., New York, 1986.
14. R. Johnson, Elementary Statistics, Prindle, Weber & Schmidt, Boston, 1984.
15. D.G. Kleinbaum and L.L. Kupper, Applied Regression Analysis and Other Multivariable Methods, Duxbury Press, North Scituate, Massachusetts, 1978.

VIII. BIOGRAPHIES
R. Ghajar obtained a B.Sc. in 1983 from the University of Ottawa, and an M.Sc. in 1986 and a Ph.D. in 1993, both from the University of Saskatchewan. Presently, he is a senior reliability engineer with Power Math Associates, Inc. His research area is the evaluation of power system reliability.

R. Billinton (F 1978) obtained B.Sc. and M.Sc. degrees from the University of Manitoba and Ph.D. and D.Sc. degrees from the University of Saskatchewan. Presently, he is C.J. MacKenzie Professor of Electrical Engineering and Associate Dean of Graduate Studies, Research and Extension of the College of Engineering. Dr. Billinton has authored and co-authored over 400 papers and 7 books on power system analysis, stability, economic system operation and reliability.


E. Chan obtained both B.Sc. and M.Sc. degrees from the University of Saskatchewan. Her research area is in the application of outage costs to power system reliability.

DISCUSSION

Lambert Pierrat, Senior Member, IEEE (General Technical Division, Electricité de France, 37 rue Diderot, 38040 Grenoble Cedex): The authors show that, for the residential sector, aggregated costs cannot account for the large data variability. This is why they propose to substitute a statistical model for these deterministic costs, because in the area of planning and operation of power systems, reliability has to take uncertainty into account. In this discussion, I wish firstly to indicate some misprints, then to propose an improvement of the statistical model, and finally to present some arguments for the choice of an adequate characteristic cost.

1. Misprints and various remarks: The first point concerns some typographic errors. By comparing the data of Table III with the graphs of Figure 3, one sees that the last line of Table III should read λ = +0.0238 and μ = 2.8289 (?). The second point concerns the average cost value for d = 1.5 hr. The average value of the sample (0.55 $/kW) is clearly superior to the average value of the model (0.13 $/kW).
1.1 The model being identified from the data of Table II, these average values would be expected to be identical. What is the reason for this divergence?
1.2 The model being based on a nonlinear transformation (equation 7), the average value does not necessarily correspond to the parameter (μ) of the normal distribution. Is this value, called the average, in fact closer to the median?

2. Critical analysis of the statistical model: The aim of the authors is to fit a continuous statistical model to the data, which allows results to be interpreted over the observation range (1/3 hr to 24 hr). But as the identification is based on 5 independent samples, this leads to a transformed model (normal law) whose parameters come from rather complicated polynomial regressions. This formulation does not allow structural relationships between the parameters and the interruption duration (d) to be easily extracted. In addition, one can see in Figure 3 that some atypical values are difficult to take into account (λ, σ² and P_z). This is why I propose to improve the model by simplifying it and by strengthening its internal coherence. My viewpoint rests on two essential observations. The first concerns the deterministic and practically invariant relationship between the duration of interruption (d) and all parameters of Table I (average, aggregated consumption-normalized and aggregated peak-normalized costs) as well as the two main parameters of Table II (mean and standard deviation). This nonlinear relationship, visible in Figures 1 and 2, can be written ln(μ_i) = (3/2)·ln(d). The second concerns the strongly asymmetrical empirical distribution of Table II: its coefficient of variation (cv ≈ 5) and its skewness coefficient (sk ≈ 20) are not very dispersed. It follows that one can validly choose a lognormal distribution, having one invariant parameter and another varying nonlinearly with d. From the information available in the paper alone, a first approximation of the pdf is the following: f(x) = [1 − P_z(d)]·LN[μ(d); σ] with P_z(d) = 0.1/d, μ(d) = μ(1) + (3/2)·ln(d), μ(1) = −8/3 and σ = 7/4. In fact, this simplified model corresponds to that of the authors in the case where λ = 0 (equations 7 and 12). It permits the fitting of the data and has good structural sturdiness. Contrary to the model of the authors, it does not respect quantitatively the asymmetry of the empirical distribution, but this is not of great importance, because one is not explicitly interested in very high costs having a very weak probability of occurrence.
2.1 What is the opinion of the authors concerning the genesis of this simplified model and the comparison of results provided by the two models?
2.2 The validity of a statistical model depends on its structure and its parameters. It is possible to estimate the confidence intervals of the simplified model parameters. Since the model proposed by the authors is identified locally with 5 samples, what guarantees the continuity of the uncertainty over the utilisation range? Can the authors bring some precision on this point?
2.3 In this area, one generally has the choice between a statistically accurate model and a simpler model that is therefore less accurate but structurally more coherent. What is the opinion of the authors concerning these two approaches and the compromises that they imply?

3. Choice of a characteristic value of the cost: The authors have clearly shown that the aggregated cost notion is not satisfactory. A statistical model allows the estimation of characteristic indicators that are coherent with each other and correspond to a certain level of probability. The application of the simplified model to the case exposed in Section V.2 (interpolation for d = 1.5 hr) gives analytically the following values: median = 0.12 $/kW; aggregate = 0.33 $/kW; mean = 0.55 $/kW (these indicators are in a constant ratio, independent of the duration). The median value (0.12 $/kW) can be compared to that calculated by the authors (0.13 $/kW). The small difference between these values is a measure of the local accuracy of the simplified model. The hierarchy of costs shows that the aggregated value lies between the median and the average, but this property is insufficient to define a valuable indicator. The average value is obviously the greatest, because it gives an equal weight to all observed values, therefore also to very high and unlikely values. The median value corresponds to the quantile of order p = 1/2, that is to say it separates the distribution into two parts of equal probability. In addition, for the lognormal distribution, the median corresponds to the average value of the transformed normal distribution. This is why I propose a characteristic cost indicator based on the median value of the distribution.
3.1 In theory, the statistical distribution allows a cost to be connected explicitly to its probability level. In practice, standardized indicators are sufficient. Do the authors agree with the choice of an indicator based on the median value?

I thank the authors for their valuable contribution and for the replies and comments evoked in this discussion.

Manuscript received August 22, 1995.
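As a numerical cross-check of the discusser's simplified model, the sketch below evaluates the proposed parameters at d = 1.5 h. The base-e lognormal convention and the application of the zero-cost fraction to the mean only are assumptions about the discusser's intent.

```python
import math

# Discusser's simplified model: f(x) = [1 - Pz(d)] * LN[mu(d); sigma], with
# Pz(d) = 0.1/d, mu(d) = mu(1) + 1.5*ln(d), mu(1) = -8/3, sigma = 7/4
# (d in hours; natural logarithms assumed).
def simplified_model(d_hours):
    pz = 0.1 / d_hours
    mu = -8.0 / 3.0 + 1.5 * math.log(d_hours)
    sigma = 7.0 / 4.0
    median = math.exp(mu)                                # median of the lognormal part
    mean = (1.0 - pz) * math.exp(mu + sigma ** 2 / 2.0)  # mean including zero costs
    return pz, median, mean

print(simplified_model(1.5))   # roughly (0.067, 0.13, 0.55), near the quoted values
```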

R. Ghajar, R. Billinton, E. Chan: We would like to thank Mr. Pierrat for his comments. We agree with him that there are two typographical errors in the last line of Table III. The values should read λ = 0.0238 and μ = 2.8289, as shown in Figure 3.

The arithmetic mean of the basic data is higher than the arithmetic mean of the transformed data. There is no suggestion in the paper that it is superior in any way. It is simply different and is obviously less influenced by the extreme values. There is no reason, when the original distribution is highly skewed, that these two values should be the same. In the case of a normal distribution, the mean and the median are the same. If the transformed data generate a normal distribution, then the median of the transformed data and the mean will be identical. We believe, however, that it is better to refer to this as the arithmetic mean of the transformed non-zero outage cost data. It should be recalled that the zero cost outage data are not included in the determination of the continuous distribution.

Mr. Pierrat correctly identifies that the analysis is based on five samples of outage duration related costs. His suggestion to simplify the model is worthy of consideration. This, however, does not necessarily strengthen internal coherence. Mr. Pierrat has decided that the lognormal distribution can be selected as the underlying distribution and used to provide a simplified representation. In this case, it appears to provide a reasonable approximation. It should be noted, however, that there are no λ = 0 values in Table III and therefore the lognormal distribution is not a good fit at the actual studied durations. We would prefer to analyze the actual data and use the derived distributions rather than implicitly assume that the data follow a lognormal distribution with an empirical base. We would also prefer to remove the zero cost outage data from the analysis prior to determining an appropriate distribution. We also do not agree with the point made by Mr. Pierrat that "one is not explicitly interested in very high costs having a very weak probability of occurrence". The objective of using a distribution approach is to recognize the diversity of outage costs, which includes the zero values and also the higher values with relatively lower frequencies of occurrence. We do not believe that "good structural sturdiness" can be attained by disregarding that data which does not fit into the assumed empirical representation.

As suggested by Mr. Pierrat, the arithmetic mean determined from the transformed non-zero outage cost data can be considered as one more indicator. Its most important use, however, is as one of the parameters required to describe the distribution of outage costs at the specified duration. Reference 11 of the paper clearly shows that incorporation of the entire dispersion in customer outage costs in an evaluation of system outage costs has a considerable impact on the assessment. The resulting value is much higher than that obtained using either the aggregate value, the sample mean value or the arithmetic mean of the transformed non-zero outage costs. In conclusion, we would like to thank Mr. Pierrat for his comments and for his obvious interest in our work. We will consider his proposed empirical model in connection with data collected for other customer sectors.
Manuscript received November 2, 1995.
