
CHAPTER 1

INTRODUCTION

1.1 RESCUE 1122

Even with all our technology and the innovations that make modern life so much more
accessible than it once was, it takes a natural disaster to wipe all that away and remind us that
we are still at the mercy of nature. The country's limited capacity to deal with calamities and
catastrophes was putting the lives of its citizens at risk. This was quite clear from the fact that
in over 95% of emergencies the sufferer would not get an ambulance for transportation, let
alone proper emergency care by trained professionals.

It was evident from history that emergency management had long been neglected in
Pakistan: there were no disaster response forces or trained emergency medical technicians available in
case of an emergency or a disaster. Moreover, emergency ambulance, rescue and trained fire
services were almost non-existent, which was clearly exposed during the October 2005 earthquake in
Pakistan. Thus, the citizens of Pakistan were deprived of even the basic right to appropriate
emergency care in natural calamities and catastrophes.

Therefore, the Punjab Emergency Service (Rescue 1122) was started in 2004 after the success
of the Lahore Pilot Project. The service is obtained by calling 1122 from any phone. It was
given legal cover under the Punjab Emergency Service Act, 2006 to provide for the management of emergencies and disasters.

Rescue 1122 has since developed into the largest humanitarian service of Pakistan and also
provides encouragement and training to the staff of other provinces. Rescue 1122 has earned the
confidence of the public by rescuing millions of victims of emergencies through its
emergency ambulance, rescue and fire services in all 36 districts of Punjab.

As a result of its performance during destructive disasters, including the ruinous floods of
2010, the Provincial Disaster Management Authority has notified the Punjab Emergency Service as
the Disaster Response Force, and the Home Department, Government of the Punjab, has also
transferred the flood relief function, along with material resources, from Civil Defence to the
Punjab Emergency Service on 28 May 2011. Rescue 1122 is also strengthening its capacity for
emergency mobilization and response in association with the National and Provincial Disaster
Management Authorities.

This is strongly reflected in the mission statement of Rescue 1122: the development
of safer communities through the establishment of an effective system for emergency preparedness,
response and prevention. Having established the core emergency service
workforce, the ultimate vision of Rescue 1122 is the prevention of emergencies through practical
public participation, saving lives and changing the mindset of society so that resilient
and secure communities can be established. In this regard, teams have already been constituted in every
district and many projects are in progress to establish safe communities in Pakistan.

1.2 FUNCTIONS OF RESCUE 1122

1.2.1. Ambulance Service:

This is the most significant function of Rescue 1122, as over 97% of emergency calls are
linked with the Emergency Ambulance Service. The service has rescued millions of victims of road
accidents, medical emergencies and other incidents while maintaining standards in all districts of
Punjab. The Punjab Emergency Service (Rescue 1122) was originally started as an
Emergency Ambulance Service on 14 October 2004 as a pilot project in Lahore. After
the success of this pilot project, the Emergency Ambulance Service was started in 12 major
cities of Punjab and subsequently in all districts of the Punjab province, which has a population of
over 80 million. Although Rescue and Fire Services were also established
subsequently, over 97% of emergency calls are still associated with the Emergency Ambulance
Service. The main beneficiaries of the service have been the victims of road traffic
accidents, whom people were earlier afraid to help for medico-legal reasons.

It was the first time that emergency medical technicians were trained for an emergency
ambulance service and that emergency ambulances of international standard were manufactured
in Pakistan. This training and local manufacture of ambulances made the project cost-effective
and worthwhile, contributing to the success of Rescue 1122.

1.2.2 Prevention of emergencies:


Prevention of emergencies is one of the key functions of the Punjab Emergency Service (Rescue
1122) in accordance with the Punjab Emergency Service Act, 2006, and an important contribution
towards the mission of the Service: the development of safer communities through the
establishment of an effective system for disaster preparedness and response. The community safety
activities of Rescue 1122 range from mobilization at the community level to policy making and
high-level monitoring of the application of protective measures. The main community
safety activities include the following.

1.2.3 Fire Safety & Prevention


The increasing number of fire emergencies required Rescue 1122 to work on fire prevention and
safety. Fire investigation was introduced in this regard to trace the causes of fires so that
proper steps could be taken for fire prevention and safety. International-standard training in fire
investigation was needed to address this important subject, which was putting the lives and properties
of citizens at risk. In this regard, a member of the Scottish Parliament played an influential role in arranging
training in fire investigation and safety.

These trained officers imparted training in the fundamental principles of fire investigation
to District Emergency Officers, Emergency Officers, Lead Fire Rescuers and Fire Rescuers. As a
result of these trainings, a fire investigation section has been developed in the Rescue Service to
undertake fire investigation studies of all major fires and recommend measures for fire
prevention.

1.2.4 Road Safety:


Over 40% of the victims attended by Rescue 1122 are those injured in road traffic crashes.
Similarly, most of the accidents and disasters reported are due to traffic crashes. The
study of traffic crashes shows that over 50% of traffic accident victims are aged 18
to 50 years; the young breadwinners of society are thus becoming victims of traffic crashes.
Deaths and disabilities from traffic accidents are therefore causing a huge socio-economic
impact on society.

In order to minimize the number of traffic accidents, Rescue 1122 has started a number of
road safety activities, which include regular data collection and analysis for prevention
purposes, a Trauma Registry Program, public awareness campaigns with safety messages, and the
release of a daily accident diary to media workers.

1.2.5 Community Emergency Response Teams (CERTs):


The Punjab Emergency Service, Rescue 1122, in accordance with section 5(g) of the Punjab
Emergency Service Act, 2006, is in the process of establishing Community Emergency Response
Teams (CERTs) to help people make their localities safer communities. CERT members have
been briefed on their responsibilities to work for the prevention of emergencies in their localities
and to enhance their capacity to respond to emergencies as first responders. CERTs have also been
trained in Community Based Disaster Risk Management (CBDRM) so that in case of any
incident, community responders can work to relieve its disastrous effects.

1.2.6 Community Action for Disaster Response (CADRE):


The Rescue 1122 Service works in close coordination with international federations
to establish best practices at the community level. With the help of the National Disaster
Management Authority (NDMA), instructors of the Emergency Services Academy have been trained
as coaches in the Community Action for Disaster Response (CADRE) Program by the Asian Disaster
Preparedness Center, Bangkok. The Service is also making efforts towards Community Based
Disaster Risk Management (CBDRM) in cooperation with the United Nations and other international
organizations.

In our study we have recorded the response time for emergency calls and studied the
parameter of its distribution in the Bayesian paradigm. In addition, we have also studied the
different categories of calls received at Rescue 1122 using the Bayesian approach.

VARIABLES

1. EMERGENCY CALLS
2. FAKE CALLS
3. WRONG CALLS
4. ABUSING CALLS
5. DISTORTED CALLS

1.3 Statistics:
Statistics is the collection of methods for planning experiments, obtaining
data, and then organizing, summarizing, presenting, analyzing and interpreting those data and
drawing conclusions from them.
Some basic concepts of classical statistics and Bayesian statistics are given
below.

o In classical statistics the parameter is considered to be a constant quantity.

o In Bayesian statistics the parameter is treated as a random variable.

o Classical statistics deals only with the current (sample) information.

o In Bayesian statistics, previous information is called prior information.

o Current information is the sample information, denoted by P(x).

o "Prior" refers to those conditions which we already know.

o The prior distribution is denoted by P(θ).

o In Bayesian statistics the parameter is θ; we use previous information to find the value of
the parameter θ.

o In Bayesian statistics the parameter varies because of the prior information.

o Previous information is always used for finding the posterior distribution.

o In Bayesian statistics the parameter, as a random variable, has its own prior distribution.

o We may combine the distributions of both sources of information, current and prior, to find the
posterior distribution.

o Bayesian statistics is used for better estimation and for future planning.

1.4 BAYESIAN STATISTICS:


In Bayesian statistics the prior information about the parameter is combined with the current
sample information to obtain the posterior information, while in classical statistics only the
current sample information is utilized:

Posterior ∝ Prior × Likelihood (current information)

Bayesian statistics provides a theory of inference which enables us to incorporate new
experimental facts and update the existing information. A Bayesian analysis uses the posterior
distribution to summarize the state of our knowledge. The posterior distribution combines
information from the data, expressed through the likelihood function, with other
information described through the prior distribution.

To understand the concept of Bayesian statistics, some fundamental terminologies are
discussed briefly below.

1.4.1 PRIOR PROBABILITY DISTRIBUTION:


The prior probability distribution of an uncertain parameter of interest is the probability
distribution expressing the available knowledge, or a person's uncertainty, about the value of that
parameter before the data are observed. Sometimes the
prior distribution used for a Bayesian analysis is an improper prior; in such a case the function
used as a prior probability density may integrate to infinity and is thus not, strictly speaking, a
probability density at all.

1.4.2 Non informative Priors:

Sometimes the prior information is vague compared to the information contained in
the sample, or sometimes no prior information is available at all. Such priors are known as
non-informative priors. A non-informative prior contains little or no information about the
parameter, and the likelihood function typically contains more information than the non-informative prior.

Research has continued over the past few years and the use of some other non-informative priors
has also been observed, for example the priors of Bernardo (1979b), Ghosh and Mukerjee (1992) and
the non-informative prior of Tibshirani (1989). Another widely used class of priors is that of
Peers (1965), rediscovered by Stein (1985). According to Leonard (1990), no prior assessment,
whether a proper or an improper prior, represents prior ignorance. For example, an improper
uniform distribution on p-dimensional real space provides the information that the parameter is
equally likely to lie in either of two regions if they have the same hypervolume.

1.4.3 Non informative Uniform Prior:


We call the uniform prior a non-informative prior because it does not favour any
possible value of the parameter over any other; however, it is not invariant under
re-parameterization. Bayes (1763) and Laplace (1812) recommended Bayesian analysis of the
unknown parameters using a uniform (possibly improper) prior; this non-informative approach
was known as inverse probability. The easiest way of specifying a non-informative prior is
the uniform prior. If the parameter space is a finite set of n
elements, the non-informative uniform prior assigns the uniform probability 1/n
to each element. For a continuous parameter we usually take the uniform prior to be unity, P(θ) ∝ 1.

1.4.4 Non-informative Jeffreys prior


The basic idea behind the non-informative Jeffreys prior is to have a prior that maximizes
the expected information from the data. Jeffreys (1961) proposed a non-informative prior
which remains invariant under any one-to-one re-parameterization. Bernardo (1979b) and Berger
and Bernardo (1989, 1992a, 1992b) have classified the Jeffreys prior as a one-step reference
prior.

1.4.5 Reference Prior:


Reference priors are frequently the objective priors of choice in multivariate
problems, where Jeffreys priors may behave poorly in the multi-parameter
case. Bernardo (1979b) introduced the analogous proposal of reference priors. The
difficulty is addressed by reference priors, which divide the parameter vector into
parameters of interest and nuisance parameters according to their degree of inferential
importance. The reference prior is obtained by maximizing an asymptotic expansion of Lindley's
measure of information. When there are no nuisance parameters and certain regularity
conditions are satisfied, Bernardo's reference prior reduces to the Jeffreys prior.

1.4.6 The Posterior Distribution:


Under the Bayesian approach, the prior distribution of the parameters is combined with the
sample information to create an updated, or posterior, distribution of the parameters.
The posterior distribution P(θ|x) is defined as:

P(θ|x) = f(x|θ) P(θ) / ∫ f(x|θ) P(θ) dθ

where θ is a p×1 vector, f(x|θ) is the sample information written in the form of the
sample density, and P(θ) is the prior density. It is to be noted that the output of a Bayesian
analysis is not a single estimate of the parameter, but rather the entire posterior distribution,
which summarizes all the required information about the parameter.

1.5 OBJECTIVES

To study the time needed to respond to the emergency calls received at the Rescue
1122 centre in Multan city, using the Bayesian approach.

To study the categories of calls received at the Rescue 1122 centre in Multan city, using the
Bayesian approach.

1.6 THESIS OUTLINE:


Chapter 1 consists of an introduction to Rescue 1122 and some basic concepts and
terminologies of Bayesian statistics which are commonly used in Bayesian analysis.
Chapter 2 presents a literature review regarding the Bayesian analysis of the exponential,
multinomial, gamma and Dirichlet distributions.
Chapter 3 describes the methods and materials used in this research.
Chapter 4 presents the analysis and explains the whole research work.

CHAPTER 2
LITERATURE REVIEW
The structure of a Bayesian network represents a set of conditional independence
relations that hold in the domain. Learning the structure of the Bayesian network model that
represents a domain can reveal insights into its underlying causal structure. Moreover, it can
also be used for prediction of quantities that are difficult, expensive, or unethical to measure,
such as the probability of lung cancer, based on other quantities that are easier to
obtain. The contributions of this thesis include:

an algorithm for determining the structure of a Bayesian network model from statistical
independence statements;

a statistical independence test for continuous variables; and

a practical application of structure learning to a decision support problem, where a model
learned from the database (most importantly its structure) is used in lieu of the database to
yield fast approximate answers to count queries, surpassing in certain aspects other
state-of-the-art approaches to the same problem.
Thrun (2003) addressed the important problem of determining the structure
of directed statistical models, with the widely used class of Bayesian network models as a
concrete vehicle for these ideas.

Cohen (2005) was concerned with a Bayesian framework for tackling the supervised
clustering problem, the generic problem encountered in tasks such as reference matching,
co-reference resolution, identity uncertainty and record linkage. Their clustering model is based on
the Dirichlet process prior, which enables them to define distributions over the countably infinite
sets that naturally arise in this problem. They add supervision to their model by positing the
existence of a set of unobserved random variables that are generic across all clusters. Inference in
their framework, which requires integrating over infinitely many parameters, is performed using
Markov chain Monte Carlo techniques. They present algorithms for both conjugate and
non-conjugate priors, together with a simple but general parameterization of their model based on a
Gaussian assumption. They evaluate this model on one artificial task and three real-world tasks,
comparing it against both unsupervised and state-of-the-art supervised algorithms.
Feroze et al. (2012) were concerned with the posterior analysis of the exponentiated gamma
distribution for type II censored samples. Expressions for Bayes estimators and associated
risks were derived under different priors. The entropy and quadratic loss functions were
assumed for estimation. The posterior predictive distributions were obtained and
corresponding intervals constructed. The study aims to find a suitable estimator of
the parameter of the distribution. Five informative and non-informative priors were assumed
under two loss functions for the posterior analysis, and the performance of the different
estimators was evaluated in a detailed simulation study. The findings suggest that the
estimators under the gamma prior using the entropy loss function perform best, and the study
proposed that, in order to estimate the said parameter, the use of the gamma prior under the
entropy loss function is to be preferred.

CHAPTER 3
Methods and Materials:
In our study we have recorded the response time for emergency calls and studied the
parameter of its distribution in the Bayesian paradigm. In addition, we have also studied the
different categories of calls received at Rescue 1122 using the Bayesian approach.

3.1 Exponential Distribution:


How much time will an emergency ambulance service take to respond in case of a
disaster? How long will one have to wait for a bus at a bus stop? How long do students
wait before a teacher comes into the classroom?

We can answer these questions in probabilistic terms using the exponential distribution.
If the waiting time is unknown, we take it as a random variable having an exponential distribution. If
the probability of occurrence of an event during a short time period is proportional to the
length of that period, then the time we need to wait has an exponential distribution. The
exponential distribution is therefore generally used to model waiting times in many practical situations.

There is a close relation between the exponential distribution and the Poisson distribution:
when waiting times are independent of previous occurrences, the number of occurrences of an
event within a given unit of time has a Poisson distribution.
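This exponential-Poisson relation can be checked by simulation; the sketch below uses an illustrative rate and horizon (not Rescue 1122 data), generating arrival times from exponential gaps and counting events per unit interval:

```python
import random

random.seed(42)

lam = 7.0          # assumed event rate per unit time (illustrative value)
horizon = 10_000   # number of unit-time intervals to simulate

# Arrival times are cumulative sums of Exponential(lam) inter-arrival gaps.
counts = [0] * horizon
t = 0.0
while True:
    t += random.expovariate(lam)
    if t >= horizon:
        break
    counts[int(t)] += 1

# If the gaps are exponential, the count per unit interval is Poisson(lam),
# so the empirical mean count should be close to lam.
mean_count = sum(counts) / horizon
```

With a long enough horizon, the empirical mean count per interval settles near the exponential rate, as the theory predicts.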

3.2 Characteristics of an exponential distribution:


The exponential distribution has the following characteristics.

Let X be a continuous random variable whose set of possible values is the set of non-negative
real numbers [0, ∞). If X has an exponential distribution with parameter λ > 0, then its
probability density function is

f(x) = λ e^(−λx),  x ≥ 0.

A random variable having an exponential distribution is also called an exponential
random variable. The mean of an exponential random variable X is
E(X) = 1/λ, and its variance is V(X) = 1/λ².
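These two moments can be verified by simulation; a minimal sketch, with the rate chosen only to match the rough scale of the response-time analysis later in the thesis:

```python
import random

random.seed(0)

lam = 0.15        # assumed rate (illustrative value)
n = 200_000       # simulation size

# Draw exponential waiting times and compute the sample moments.
sample = [random.expovariate(lam) for _ in range(n)]
mean = sum(sample) / n
var = sum((x - mean) ** 2 for x in sample) / n

# Theory: E(X) = 1/lam and V(X) = 1/lam**2
```

The sample mean approaches 1/λ ≈ 6.67 and the sample variance approaches 1/λ² ≈ 44.4 as n grows.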

3.3 Bayesian inference:


A method of inference which is used to update the probability estimate for a hypothesis is
known as Bayesian inference; it is one of the two principal approaches to statistical
inference. It was the Reverend Thomas Bayes (1763) who gave Bayes' rule, which is used for
updating probability estimates and is considered to be the foundation of Bayesian
inference. Bayesian inference is a modern revival of the classical definition of probability
associated with Pierre-Simon Laplace, whereas the frequentist definition of probability is
associated with R. A. Fisher.

The greatest revolution in statistics began with Bayesian inference, and as a statistical method
it often performs as well as, or better than, the alternatives. Bayesian inference uses prior
probabilities to estimate posterior probabilities. In Bayesian inference θ is the parameter, and
we use previous information to find the value of θ; the parameter varies because of the prior
information and is treated as a random variable with its own prior distribution P(θ). Bayesian
inference is used in different fields including medicine, science, engineering, philosophy and
law. It is used for better estimation and for future planning, and it empowers us to tackle
very difficult problems.

We use the following topics of Bayesian inference in our research.

Bayes' theorem

Likelihood

Log-likelihood

Prior probabilities

Classes of prior probabilities

Uniform prior

Informative prior

Jeffreys prior

Posterior distribution

Hyper-parameters

3.3.1 Bayes' theorem:


Bayes' theorem is considered to be the foundation of Bayesian inference. Bayes' theorem
shows the relation between the conditional probabilities of two events A and B taken in
opposite order. The theorem was introduced by the Reverend Thomas Bayes (1702-1761) and is
also known as Bayes' law or Bayes' rule. Bayes' theorem expresses the conditional
probability, or 'posterior probability', of an event A after B is observed in terms of the 'prior
probability' of A, the prior probability of B, and the conditional probability of B given A, denoted
B | A.

Bayes' theorem can be used in all common interpretations of probability. It gives the
conditional probability of A given B as

P(A|B) = P(B|A) P(A) / P(B).

In Bayesian inference we replace the event B with the observation y, the event A with the
parameter θ, and the probabilities P with densities p. The term p(y) in the denominator does not
depend on θ and can be dropped, so the relation changes from 'equal to' to 'proportional to':

p(θ|y) ∝ p(y|θ) p(θ).
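A small numerical illustration of the theorem, with hypothetical numbers chosen to echo the call-screening setting of this thesis (they are assumptions, not Rescue 1122 data):

```python
# A is the event "the call is fake"; B is the event "a screening rule flags the call".
p_a = 0.05              # assumed prior probability of a fake call
p_b_given_a = 0.90      # assumed flag rate among fake calls
p_b_given_not_a = 0.10  # assumed flag rate among genuine calls

# Total probability of B, then Bayes' theorem for P(A|B).
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
p_a_given_b = p_b_given_a * p_a / p_b
```

Here the posterior probability of a fake call rises from the 5% prior to about 32% once the flag is observed, illustrating how the observation updates the prior.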

3.3.2 Prior Distribution:


The prior probability distribution of an uncertain parameter of interest is the probability
distribution expressing the available knowledge, or a person's uncertainty, about the value of that
parameter before the data are observed. Sometimes the
prior distribution used for a Bayesian analysis is an improper prior; in such a case the function
used as a prior probability density may integrate to infinity and is thus not, strictly speaking, a
probability density at all.

In this research we use the gamma distribution as the prior distribution, with the exponential
distribution as the current (sampling) distribution, to find the posterior distribution:

P(λ) = (b^a / Γ(a)) λ^(a−1) e^(−bλ),  λ > 0.

Hyper-parameters:

In Bayesian statistics, the parameters of the prior distribution are called hyper-parameters;
the term is used to distinguish them from the parameters of the model.

For example, in analysing the response time using the Bayesian approach, we use a gamma
distribution as the prior for the parameter λ of the exponential distribution:

P(λ) = (b^a / Γ(a)) λ^(a−1) e^(−bλ).

Here λ is the parameter of the exponential distribution (the underlying system), while a and b are
the parameters of the prior (gamma) distribution, i.e. the hyper-parameters.
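The gamma prior density can be evaluated directly; the sketch below (with illustrative hyper-parameter values, not ones used in the thesis) also checks numerically that a proper prior integrates to 1:

```python
import math

def gamma_prior(lam, a, b):
    """Gamma(a, b) density in the shape/rate form used above: P(lam)."""
    return (b ** a) / math.gamma(a) * lam ** (a - 1) * math.exp(-b * lam)

a, b = 2.0, 10.0   # assumed hyper-parameters (illustrative only)

# A proper prior integrates to 1 over (0, infinity); approximate the integral
# with a Riemann sum over (0, 20], beyond which the density is negligible here.
step = 0.001
total = sum(gamma_prior(k * step, a, b) * step for k in range(1, 20_000))
```

The Riemann sum comes out very close to 1, confirming that this gamma prior is a proper density.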

3.3.3 Posterior distribution


Under the Bayesian approach, the prior distribution of the parameters is combined with the sample
information to create an updated, or posterior, distribution of the parameters.

The posterior distribution P(θ|x) is defined as:

P(θ|x) = f(x|θ) P(θ) / ∫ f(x|θ) P(θ) dθ

We may combine the distributions of both sources of information, current and prior, to find the
posterior distribution. In Bayesian statistics θ is the parameter; we use previous information to
find its value, the parameter varies because of the prior information, and previous information
is always used in finding the posterior distribution.

3.3.4 Likelihood:

The likelihood function of n random variables X₁, X₂, …, Xₙ is defined to be the joint
density of these n random variables, say f(x₁, …, xₙ; θ), which is considered to be a function
of θ. In particular, if x₁, …, xₙ is a random sample from a density f(x; θ), then the
likelihood function is defined as

L(θ; x₁, …, xₙ) = f(x₁; θ) f(x₂; θ) ⋯ f(xₙ; θ) = ∏ᵢ₌₁ⁿ f(xᵢ; θ).

Priors for the exponential distribution:

Uniform prior

Informative prior

Jeffreys prior

3.3.5 Uniform prior

We call the uniform prior a non-informative prior because it does not favour any
possible value of the parameter over any other; however, it is not invariant under
re-parameterization. Bayes (1763) and Laplace (1812) recommended Bayesian analysis of the
unknown parameters using a uniform (possibly improper) prior; this non-informative approach
was known as inverse probability. The easiest way of specifying a non-informative prior is
the uniform prior. If the parameter space is a finite set of n elements, the non-informative
uniform prior assigns the uniform probability 1/n to each element. For a continuous parameter
we usually take the uniform prior to be unity:

P(λ) ∝ 1.

With the exponential likelihood L(λ|x) = λⁿ e^(−λ Σᵢ₌₁ⁿ xᵢ), Bayes' theorem gives

P(λ|x) ∝ L(λ|x) P(λ) ∝ λⁿ e^(−λ Σᵢ₌₁ⁿ xᵢ).

Letting T = Σᵢ₌₁ⁿ xᵢ, this is the kernel of a gamma density, so

P(λ|x) = Gamma(n + 1, T).

It can be seen that the posterior parameters obtained from the posterior distribution are the
prior information updated by the sample, so the information has been improved.
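Under the uniform prior, the posterior moments follow directly from the Gamma(n + 1, T) form; a minimal sketch using the data summaries reported in Chapter 4 (n = 33 months, T = 226.05), which reproduces the SAS posterior mean 0.15041 and standard deviation 0.02579 given there:

```python
import math

# Data summaries from Chapter 4: n monthly observations, T = sum of the x_i.
n, T = 33, 226.05

# Uniform prior P(lam) ∝ 1 with the exponential likelihood gives Gamma(n + 1, T).
alpha, beta = n + 1, T
post_mean = alpha / beta            # posterior mean (n + 1) / T
post_var = alpha / beta ** 2        # posterior variance (n + 1) / T^2
post_sd = math.sqrt(post_var)
```

The agreement with the Chapter 4 SAS output confirms the derivation above.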

3.3.6 Bayesian analysis using an informative prior:

The sampling density is f(x|λ) = λ e^(−λx), so the likelihood is

P(x|λ) = λⁿ e^(−λ Σᵢ₌₁ⁿ xᵢ).

The informative gamma prior is

P(λ) = (b^a / Γ(a)) λ^(a−1) e^(−bλ).

By Bayes' theorem,

P(λ|x) ∝ P(x|λ) P(λ) ∝ λ^(n+a−1) e^(−λ(b + Σᵢ₌₁ⁿ xᵢ)).

Letting β = b + Σᵢ₌₁ⁿ xᵢ, the posterior is

P(λ|x) = Gamma(a + n, β).

It can be seen that the posterior parameters are the hyper-parameters of the prior distribution
updated by the data, so the information has been improved.
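The conjugate update is one line of arithmetic; in the sketch below, a and b are illustrative hyper-parameter choices (not values used in the thesis), while n and T are the Chapter 4 data summaries:

```python
# Conjugate gamma-exponential update: Gamma(a, b) prior -> Gamma(a + n, b + T).
a, b = 2.0, 10.0     # assumed hyper-parameters (illustrative only)
n, T = 33, 226.05    # data summaries from Chapter 4

alpha_post = a + n   # updated shape
beta_post = b + T    # updated rate
post_mean = alpha_post / beta_post
```

With a weak prior like this, the posterior mean stays close to the data-driven estimates of the other priors, as expected.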

3.3.7 Bayesian analysis using the Jeffreys prior:

The Jeffreys prior is computed as

P(λ) ∝ √I(λ),  where I(λ) = −E[∂² log L(x|λ) / ∂λ²]

is the Fisher information. The probability density function of the exponential distribution with
parameter λ is

f(xᵢ|λ) = λ e^(−λxᵢ),  i = 1, 2, …, n.

The likelihood of the exponential distribution is

L(x|λ) = λⁿ e^(−λ Σᵢ₌₁ⁿ xᵢ),

and the log-likelihood of the exponential distribution is

Log [L(x|λ)] = n log λ − λ Σᵢ₌₁ⁿ xᵢ ------ (1)

Now differentiating (1) with respect to λ:

∂ Log [L(x|λ)] / ∂λ = n/λ − Σᵢ₌₁ⁿ xᵢ.

Now differentiating (1) again with respect to λ:

∂² Log [L(x|λ)] / ∂λ² = −n/λ² − 0 = −n/λ².

Applying the expectation:

−E[∂² Log [L(x|λ)] / ∂λ²] = n/λ².

The Jeffreys prior is therefore

P(λ) ∝ √(n/λ²) ∝ 1/λ.
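The Fisher information derived above can be sanity-checked numerically by finite-differencing the log-likelihood; in this sketch n and T follow the Chapter 4 summaries, while λ and the step size h are illustrative:

```python
import math

n, T = 33, 226.05     # data summaries from Chapter 4
lam, h = 0.15, 1e-4   # evaluation point and step size (illustrative)

def loglik(l):
    # Log-likelihood of the exponential model: n*log(lam) - lam * sum(x_i)
    return n * math.log(l) - l * T

# Central second difference approximates the second derivative of log L.
d2 = (loglik(lam + h) - 2 * loglik(lam) + loglik(lam - h)) / h ** 2
observed_info = -d2   # should match n / lam**2 up to discretization error
```

For the exponential model the observed information −∂² log L/∂λ² equals n/λ² exactly, independent of the data, which is why the expectation step in the derivation is immediate.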

3.3.8 Posterior distribution using the Jeffreys prior:

According to Bayes' theorem,

P(λ|x) ∝ L(x|λ) P(λ),

with P(λ) ∝ 1/λ and L(x|λ) = λⁿ e^(−λ Σᵢ₌₁ⁿ xᵢ), so

P(λ|x) ∝ λ^(n−1) e^(−λ Σᵢ₌₁ⁿ xᵢ). ------ (1)

From the resulting equation we can easily see that the posterior is Gamma(α, β) where

α = n
and
β = Σᵢ₌₁ⁿ xᵢ.

When the denominator of Bayes' theorem is integrated over the interval 0 to ∞, it gives the
normalizing constant, so that the incomplete kernel of Gamma(α, β) becomes a complete density.
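The Gamma(n, Σxᵢ) posterior can likewise be evaluated from the Chapter 4 summaries (n = 33, T = 226.05); the mean reproduces the SAS value 0.14599 reported there (small rounding differences in the variance are possible):

```python
import math

n, T = 33, 226.05   # data summaries from Chapter 4

# Jeffreys prior P(lam) ∝ 1/lam gives the Gamma(n, T) posterior derived above.
post_mean = n / T               # posterior mean n / T
post_var = n / T ** 2           # posterior variance n / T^2
post_sd = math.sqrt(post_var)
```

Compared with the uniform-prior posterior Gamma(n + 1, T), the Jeffreys posterior shifts the mean down slightly, from 34/T to 33/T.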

3.3.9 Method of maximum likelihood:

The Bayesian estimator coincides with the maximum-likelihood estimator when a uniform prior
distribution P(λ) ∝ 1 is used; in this research the posterior is obtained under the uniform,
informative and Jeffreys priors.

For the uniform prior:

The probability density function of the exponential distribution with parameter λ is

f(xᵢ|λ) = λ e^(−λxᵢ),  i = 1, 2, …, n.

The likelihood of the exponential distribution is

L(x|λ) = λⁿ e^(−λ Σᵢ₌₁ⁿ xᵢ),

and the log-likelihood of the exponential distribution is

Log [L(x|λ)] = n log λ − λ Σᵢ₌₁ⁿ xᵢ ------ (1)

Now differentiating (1) with respect to λ and setting the derivative equal to zero:

∂ Log [L(x|λ)] / ∂λ = n/λ − Σᵢ₌₁ⁿ xᵢ = 0,

which gives the maximum-likelihood estimator

λ̂ = n / Σᵢ₌₁ⁿ xᵢ.
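The closed-form estimator can be checked against a simple grid search over the log-likelihood; a sketch using the Chapter 4 data summaries (the grid bounds are illustrative):

```python
import math

n, T = 33, 226.05   # data summaries from Chapter 4
mle = n / T         # closed-form estimator from setting the score to zero

# Sanity check: the log-likelihood n*log(lam) - lam*T should peak at the grid
# point nearest n/T on a fine grid of candidate rates.
grid = [0.05 + 0.001 * i for i in range(200)]
best = max(grid, key=lambda lam: n * math.log(lam) - lam * T)
```

The grid maximizer lands next to n/T ≈ 0.146, agreeing with the analytical solution and with the Jeffreys-prior posterior mean of Section 3.3.8.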

3.4 Methodology of analyzing different categories of calls in the Rescue 1122 data:

We now analyze five categories of calls received at Rescue 1122 using the Bayesian approach.

1. EMERGENCY CALLS
2. FAKE CALLS
3. WRONG CALLS
4. ABUSING CALLS
5. DISTORTED CALLS

3.4.1 Multinomial Distribution:


The multinomial distribution is a generalization of the binomial distribution. For n
independent trials each of which leads to a success for exactly one of k categories, with each
category having a given fixed success probability, the multinomial distribution gives the
probability of any particular combination of numbers of successes for the various categories.
The binomial distribution is the probability distribution of the number of successes for
one of just two categories in n independent Bernoulli trials, with the same probability of success
on each trial. In a multinomial distribution, the analog of the Bernoulli distribution is the
categorical distribution, where each trial results in exactly one of some fixed finite number k
possible outcomes. The binomial trial experiment becomes a multinomial experiment if we let
each trial have more than 2 possible outcomes. For example, the drawing of a card from a deck
with replacement is also a multinomial experiment if the four suits are the outcomes of interest.
Recording accidents at a certain intersection according to the day of the week also constitutes
a multinomial experiment.
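The multinomial-as-repeated-categorical-trials view can be illustrated in a few lines; the category probabilities below are hypothetical values for the five call types, not estimates from the thesis data:

```python
import random

random.seed(1)

# Hypothetical category probabilities for the five call types (illustrative only).
categories = ["emergency", "fake", "wrong", "abusing", "distorted"]
probs = [0.60, 0.15, 0.12, 0.08, 0.05]

# Each call is one categorical trial; n trials together form one multinomial draw.
n = 10_000
counts = dict.fromkeys(categories, 0)
for _ in range(n):
    counts[random.choices(categories, weights=probs)[0]] += 1
```

The resulting count vector is a single draw from a Multinomial(n, probs) distribution, with category frequencies close to the assumed probabilities.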

3.4.2 Using Dirichlet distribution as prior:


The Dirichlet distribution is the conjugate prior distribution of the categorical distribution (a generic
discrete probability distribution with a given number of possible outcomes) and of the multinomial
distribution (the distribution over observed counts of each possible category in a set of
categorically distributed observations). This means that if a data point has a categorical or
multinomial distribution, and the prior distribution of the data point's parameter (the vector of
probabilities that generates the data point) is a Dirichlet, then the posterior
distribution of the parameter is also a Dirichlet. Intuitively, in such a case, starting from what we
know about the parameter prior to observing the data point, we can then update our knowledge
based on the data point and end up with a new distribution of the same form as the old one. This
means that we can successively update our knowledge of a parameter by incorporating new
observations one at a time, without running into mathematical difficulties.

CHAPTER 4
ANALYSIS
In this chapter we discuss results of our whole research work which we conclude after
analyzing the first emergency service Rescue 1122 data using Bayesian paradigm. The purpose
of Bayesian analysis is to revise and update the initial assessment of the event probabilities
generated by alternative solutions. This is achieved by the use of additional information. The
essence of Bayesian methods consists in identifying our prior beliefs about what results are
likely, and then updating those according to the data we collect. Intuitively, the updating process
will more readily accept estimates consistent with the prior (if we believe that a defined rate is perfectly
likely, our posterior will move there readily after a small number of observations), but will require more
data to accept estimates that are less probable according to the prior. This is very straightforward, we start
with a prior belief, and then update it in line with the incoming data. As the data start coming in, we start
updating our beliefs. If the incoming data points to an improvement in the conversion rate, we start
moving our estimate of the effect from the prior upwards; the more data we collect, the more confident
we are in it. The end result is what is called the posteriora probability distribution describing the likely
effect of treatment incorporating both the previous and current information.

We prefer the Bayesian paradigm for analyzing the Rescue 1122 data for two main reasons. First, in a Bayesian analysis the end result is a probability distribution rather than a point estimate: instead of thinking in terms of p-values, we can think directly in terms of the distribution of possible values. For example, if only 2% of the values of the posterior distribution lie below 0.05, we are 98% confident that the rate is above 0.05; if 70% of the values lie above 0.1, we are 70% confident that the rate is above 0.1. Second, this makes the results of the analysis much easier to understand and communicate. In this research work, we recorded the response times of emergency calls and study the parameter of their distribution in the Bayesian paradigm.
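Probability statements of this kind can be read directly off the posterior. A minimal sketch in Python (our own Monte Carlo check, not the SAS grid integration used in the thesis), assuming the Gamma(34, 226.05) posterior that the uniform-prior analysis of this chapter implies:

```python
import random

# The probability that the rate exceeds a threshold is just the fraction of
# posterior mass above it, approximated here by Monte Carlo samples.
random.seed(0)
shape, rate = 34, 226.05            # posterior implied by the uniform prior
samples = [random.gammavariate(shape, 1 / rate) for _ in range(100_000)]

def prob_above(threshold):
    """Posterior probability that the rate exceeds `threshold`."""
    return sum(s > threshold for s in samples) / len(samples)

print(f"P(rate > 0.05) ~ {prob_above(0.05):.3f}")
print(f"P(rate > 0.10) ~ {prob_above(0.10):.3f}")
```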

4.1 Analysis of response time of emergency calls:

Average response time (M.S) of emergency calls, January 2011 to September 2013:

Year   Jan   Feb   Mar   Apr   May   Jun   Jul   Aug   Sep   Oct    Nov   Dec
2011   6.8   6.7   7.2   6.7   6.5   7.1   7.1   7.0   7.3   6.9    6.9   6.8
2012   6.2   6.4   5.6   6.6   6.8   7.2   5.8   9.1   6.0   6.95   6.0   5.8
2013   8.2   7.9   7.9   7.7   6.4   6.1   6.1   6.4   7.9

Using the above data set of response times of emergency calls over three years (2011-2013), we calculate the posterior mean, variance, standard deviation, and mode with the help of the statistical computing packages SAS and Wolfram Mathematica. In this research work, we use three classes of prior to analyze the average response time of emergency calls:
1. Uniform prior
2. Jeffreys prior
3. Informative prior
We compare the posterior means obtained using the uniform prior with those obtained using the Jeffreys prior and the informative prior, as well as the variances, standard deviations, and modes obtained using all the priors. The following results were obtained with SAS.
Posterior estimates of the response-time parameter (SAS output):

Prior                      N    T        Mean      Variance      SD
Uniform (Appendix A)       33   226.05   0.15041   0.000665358   0.02579
Jeffreys (Appendix C)      33   226.05   0.14599   0.000645651   0.025409
Informative (Appendix B)   33   226.05   0.12407   0.000466243   0.021592

4.2 Conclusions of the response-time analysis:

As the following results show, the posterior means under all the priors are similar, and the posterior modes are nearly identical. The variance and standard deviation obtained using the informative prior are the smallest, while those obtained using the uniform prior are the largest. This shows that the estimates obtained using the informative prior are the most precise: the informative prior proves better than the non-informative priors. However, the results obtained using the two non-informative priors are almost identical, so either one can be used as the non-informative prior. The results for the uniform, Jeffreys, and informative priors for the response time are given in the following table.
Prior               Mean      Variance      Standard deviation   Mode
Uniform prior       0.15041   0.000665358   0.025794             0.15
Jeffreys prior      0.14599   0.000645651   0.025409             0.14
Informative prior   0.12407   0.000466243   0.021592             0.12
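The tabulated estimates agree with the closed-form conjugate results. For an Exponential(θ) likelihood with n = 33 observations and total response time T = 226.05, a uniform prior yields a Gamma(n+1, T) posterior, the Jeffreys prior yields Gamma(n, T), and a Gamma(a, b) prior yields Gamma(n+a, T+b); with a = 0.01 and b = 40.01 (the hyperparameters used in Appendix B), these reproduce the tabulated means and modes, and the variances up to the accuracy of the grid integration. A sketch in Python (our own check, not part of the SAS analysis):

```python
# Closed-form posterior of an Exponential(theta) rate:
#   uniform prior     -> Gamma(n + 1, T)
#   Jeffreys prior    -> Gamma(n, T)
#   Gamma(a, b) prior -> Gamma(n + a, T + b)
# For Gamma(shape, rate): mean = shape/rate, variance = shape/rate**2,
# mode = (shape - 1)/rate.

n, T = 33, 226.05      # number of monthly observations and their total
a, b = 0.01, 40.01     # informative-prior hyperparameters (Appendix B)

def gamma_summary(shape, rate):
    return shape / rate, shape / rate ** 2, (shape - 1) / rate

posteriors = {
    "Uniform":     (n + 1, T),
    "Jeffreys":    (n, T),
    "Informative": (n + a, T + b),
}
for name, (shape, rate) in posteriors.items():
    mean, var, mode = gamma_summary(shape, rate)
    print(f"{name:11s} mean={mean:.5f}  var={var:.9f}  mode={mode:.2f}")
```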

Graph of the posterior of the response-time parameter under the Jeffreys prior (SAS code in Appendix G.2):

[Figure: posterior density (INTEG, 0.000-0.023) plotted against the parameter value over the range 0.01-0.29.]

Graph of the posterior of the response-time parameter under the uniform prior (SAS code in Appendix A2):

[Figure: posterior density (INTEG, 0.000-0.024) plotted against the parameter value over the range 0.00-0.50.]

Graph of the posterior of the response-time parameter under the informative prior (SAS code in Appendix C2):

[Figure: posterior density (INTEG, 0.000-0.023) plotted against the parameter value over the range 0.01-0.25.]

4.3 Analysis of the five categories of calls received at Rescue 1122 using the Bayesian approach:
1. Emergency calls
2. Fake calls
3. Wrong calls
4. Abusing calls
5. Distorted calls

4.3.1 Analysis of the Categories of Calls using the Uniform Prior:

The selected variables follow a multinomial distribution. In the multinomial setting, the analogue of the Bernoulli distribution is the categorical distribution, in which each trial results in exactly one of a fixed finite number k of possible outcomes. The following posterior estimates were obtained with the statistical computing package SAS.

Posterior estimates of the multinomial parameters using the uniform prior:

Category of calls   Mean       Variance      Standard deviation   Mode
Emergency calls     0.46300    0.01146775    0.1070875            0.54
Fake calls          0.050008   0.0043421     0.065878             0.64
Wrong calls         0.38678    0.001145369   0.0338433            0.06
Abusing calls       0.050066   0.000003330   0.001824             0.07
Distorted calls     0.050132   0.000006629   0.0025746            0.06

4.3.2 Analysis of the Categories of Calls using the Dirichlet Prior:

As discussed, the variables follow a multinomial distribution. Given the support of the parameter, we take the Dirichlet distribution as the prior distribution of the parameter. The following posterior estimates were obtained with the statistical computing package SAS.

Posterior estimates using the Dirichlet distribution as informative prior:

Category of calls   Mean        Variance      Standard deviation   Mode
Emergency calls     0.43142     0.000923646   0.03039              0.45
Fake calls          0.095465    0.00264119    0.05139              0.55
Wrong calls         0.37310     0.000905999   0.03009              0.04
Abusing calls       0.0500037   0.000000203   0.00045              0.05
Distorted calls     0.05006     0.000000377   0.00061              0.05
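Both sets of estimates rest on multinomial-Dirichlet conjugacy: with category counts x and a Dirichlet(α) prior, the posterior is Dirichlet(α + x), whose mean for category i is (αᵢ + xᵢ)/(Σⱼ αⱼ + n). The sketch below computes the exact conjugate posterior means from the call counts in Appendix E (84, 1, 70, 4, 5) under the uniform prior Dirichlet(1, …, 1); the tabulated SAS values can differ from these exact values because they were obtained by grid integration with a coarse step (DL = 0.05).

```python
# Multinomial-Dirichlet conjugacy: posterior = Dirichlet(alpha + counts),
# posterior mean of category i = (alpha_i + x_i) / (sum(alpha) + n).
# Call counts are taken from Appendix E; the uniform prior corresponds to
# Dirichlet(1, ..., 1).

categories = ["emergency", "fake", "wrong", "abusing", "distorted"]
counts = [84, 1, 70, 4, 5]              # observed calls per category
alpha = [1, 1, 1, 1, 1]                 # uniform Dirichlet(1,...,1) prior

total = sum(counts) + sum(alpha)
post_mean = [(a + x) / total for a, x in zip(alpha, counts)]

for cat, m in zip(categories, post_mean):
    print(f"{cat:9s} posterior mean = {m:.4f}")

assert abs(sum(post_mean) - 1) < 1e-12  # posterior means sum to one
```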

4.3.3 Comparison of posterior estimates of the Uniform and Informative priors for Emergency calls:

Posterior estimate     Uniform prior   Informative prior
Mean                   0.46300         0.43142
Variance               0.01146775      0.000923646
Standard deviation     0.1070875       0.03039
Mode                   0.54            0.45

Taking emergency calls as the variable, the posterior mean, variance, standard deviation, and mode under the uniform prior all exceed those under the informative prior. In particular, the informative prior yields the smaller posterior variance and standard deviation, so its estimates are the more precise.

4.3.4 Comparison of posterior estimates of the Uniform and Informative priors for Fake calls:

Posterior estimate     Uniform prior   Informative prior
Mean                   0.050008        0.095465
Variance               0.0043421       0.00264119
Standard deviation     0.065878        0.05139
Mode                   0.64            0.55

Taking fake calls as the variable, the posterior variance, standard deviation, and mode under the uniform prior exceed those under the informative prior, although the posterior mean under the informative prior is larger. The informative prior again yields the smaller variance and standard deviation, so its estimates are the more precise.

4.3.5 Comparison of posterior estimates of the Uniform and Informative priors for Wrong calls:

Posterior estimate     Uniform prior   Informative prior
Mean                   0.38678         0.37310
Variance               0.001145369     0.000905999
Standard deviation     0.0338433       0.03009
Mode                   0.06            0.04

Taking wrong calls as the variable, the posterior mean, variance, standard deviation, and mode under the uniform prior all exceed those under the informative prior, so the informative prior again gives the more precise estimates.

4.3.6 Comparison of posterior estimates of the Uniform and Informative priors for Abusing calls:

Posterior estimate     Uniform prior   Informative prior
Mean                   0.050066        0.0500037
Variance               0.000003330     0.000000203
Standard deviation     0.001824        0.00045
Mode                   0.07            0.05

Taking abusing calls as the variable, the posterior mean, variance, standard deviation, and mode under the uniform prior all exceed those under the informative prior, so the informative prior again gives the more precise estimates.

4.3.7 Comparison of posterior estimates of the Uniform and Informative priors for Distorted calls:

Posterior estimate     Uniform prior   Informative prior
Mean                   0.050132        0.05006
Variance               0.000006629     0.000000377
Standard deviation     0.0025746       0.00061
Mode                   0.06            0.05

Taking distorted calls as the variable, the posterior mean, variance, standard deviation, and mode under the uniform prior all exceed those under the informative prior, so the informative prior again gives the more precise estimates.

Conclusion:

After analyzing the above data set of response times of emergency calls over three years (2011-2013) in the Bayesian paradigm, we conclude that the informative prior is better than the uniform and Jeffreys priors, because it has the smallest posterior variance. We then analyzed the five categories of calls received at Rescue 1122 (emergency, fake, wrong, abusing, and distorted calls) using the Bayesian approach and found that emergency calls have a greater posterior mean than the other four categories. For the response-time parameter, the posterior means under all the priors are similar and the posterior modes are nearly identical, while the variance and standard deviation are smallest under the informative prior and largest under the uniform prior. The informative prior therefore gives the most precise estimates; the two non-informative priors give almost identical results, so either one can be used as the non-informative prior.

Appendix Tables
Appendix A
Mean and variance using uniform prior exponential
DATA DD;
INPUT X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15 X16 X17
X18 X19 X20 X21 X22 X23 X24 X25 X26 X27 X28 X29 X30 X31 X32 X33 DL;
CARDS;
6.8 6.7 7.2 6.7 6.5 7.1 7.1 7.0 7.3 6.9 6.9 6.8 6.2 6.4 5.6 6.6 6.8 7.2
5.8 9.1
6.0 6.95 6 5.8 8.2 7.9 7.9 7.7 6.4 6.1 6.1 6.4 7.9 0.01
;
DATA CC; SET DD;
T=x1+x2+x3+x4+x5+x6+x7+x8+x9+x10+x11+x12+x13+x14+x15+x16+x17+x18+x19+x20+
x21+x22+x23+x24+x25+x26+x27+x28+x29+x30+x31+x32+x33;
DO U=DL TO 500-DL BY DL;
INTEG=0;
FUN=U*7.8656E-44**-1*(U**33)*EXP(T*-U);
FUN2=DL**1*FUN;
INTEG+FUN2;
OUTPUT;
END;
*PROC PRINT DATA=CC;RUN;DATA DT1; SET CC; RUN;
PROC SORT DATA=DT1 OUT=DT1_SORT; BY INTEG; RUN;
PROC PRINT; VAR U INTEG; RUN;

Appendix A1
Mode using uniform prior for exponential
DATA DD;
INPUT X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15 X16 X17
X18 X19 X20 X21 X22 X23 X24 X25 X26 X27 X28 X29 X30 X31 X32 X33 DL;
CARDS;
6.8 6.7 7.2 6.7 6.5 7.1 7.1 7.0 7.3 6.9 6.9 6.8 6.2 6.4 5.6 6.6 6.8 7.2
5.8 9.1
6.0 6.95 6 5.8 8.2 7.9 7.9 7.7 6.4 6.1 6.1 6.4 7.9 0.01
;
DATA CC; SET DD;
T=x1+x2+x3+x4+x5+x6+x7+x8+x9+x10+x11+x12+x13+x14+x15+x16+x17+x18+x19+x20+
x21+x22+x23+x24+x25+x26+x27+x28+x29+x30+x31+x32+x33;
DO U=DL TO 500-DL BY DL;
INTEG=0;
FUN=U*7.8656E-44**-1*(U**33)*EXP(T*-U);
FUN2=DL**1*FUN;
INTEG+FUN2;
OUTPUT;
END;
*PROC PRINT DATA=CC;RUN;DATA DT1; SET CC; RUN;
PROC SORT DATA=DT1 OUT=DT1_SORT; BY INTEG; RUN;
PROC PRINT; VAR U INTEG; RUN;

Appendix A2
Plot for uniform prior using exponential

DATA DD;
INPUT X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15 X16 X17
X18 X19 X20 X21 X22 X23 X24 X25 X26 X27 X28 X29 X30 X31 X32 X33 DL;
CARDS;
6.8 6.7 7.2 6.7 6.5 7.1 7.1 7.0 7.3 6.9 6.9 6.8 6.2 6.4 5.6 6.6 6.8 7.2
5.8 9.1
6.0 6.95 6 5.8 8.2 7.9 7.9 7.7 6.4 6.1 6.1 6.4 7.9 0.01
;
DATA CC; SET DD;
T=x1+x2+x3+x4+x5+x6+x7+x8+x9+x10+x11+x12+x13+x14+x15+x16+x17+x18+x19+x20+
x21+x22+x23+x24+x25+x26+x27+x28+x29+x30+x31+x32+x33;
DO U=DL TO 0.5-DL BY DL;
INTEG=0;
FUN=U*7.8656E-44**-1*(U**33)*EXP(T*-U);
FUN2=DL**1*FUN;
INTEG+FUN2;
OUTPUT;
END;
AXIS1 Length=2 IN;
AXIS2 Length=4 IN;
PROC GPLOT DATA=CC;
SYMBOL1 INTERPOL=JOIN;
PLOT INTEG*U=1 / VAXIS=AXIS1 HAXIS=AXIS2;
RUN;

Appendix B
Mean and variance using gamma as an informative prior
DATA DD;
INPUT X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15 X16 X17
X18 X19 X20 X21 X22 X23 X24 X25 X26 X27 X28 X29 X30 X31 X32 X33 A1 B1 DL;
CARDS;
6.8 6.7 7.2 6.7 6.5 7.1 7.1 7.0 7.3 6.9 6.9 6.8 6.2 6.4 5.6 6.6 6.8
7.2 5.8 9.1 6.0 6.95 6 5.8 8.2 7.9 7.9 7.7 6.4 6.1 6.1 6.4 7.9 0.01 40.01
0.01
;
DATA CC; SET DD;
T=x1+x2+x3+x4+x5+x6+x7+x8+x9+x10+x11+x12+x13+x14+x15+x16+x17+x18+x19+x20+
x21+x22+x23+x24+x25+x26+x27+x28+x29+x30+x31+x32+x33;
DO U=DL TO 500-DL BY DL;
PDF=2.4362E-45**-1*U*
(U**((A1+33)-1))*(EXP(-U*(B1+T)));
FUN2=DL**1*PDF;
INTEG+FUN2;
FUN3=2.4362E-45**-1*U**2*
(U**((A1+33)-1))*(EXP(-U*(B1+T)));
FUN4=DL**1*FUN3;
INTEG1+FUN4;
VARR=INTEG1-INTEG**2;
END;
PROC PRINT DATA=CC; VAR VARR INTEG;
RUN;

Appendix C
Mean and variance Jeffreys prior
DATA DD;
INPUT X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15 X16 X17
X18 X19 X20 X21 X22 X23 X24 X25 X26 X27 X28 X29 X30 X31 X32 X33 DL;
CARDS;
6.8 6.7 7.2 6.7 6.5 7.1 7.1 7.0 7.3 6.9 6.9 6.8 6.2 6.4 5.6 6.6 6.8 7.2
5.8 9.1
6.0 6.95 6 5.8 8.2 7.9 7.9 7.7 6.4 6.1 6.1 6.4 7.9 0.01
;
DATA CC; SET DD;
T=x1+x2+x3+x4+x5+x6+x7+x8+x9+x10+x11+x12+x13+x14+x15+x16+x17+x18+x19+x20+
x21+x22+x23+x24+x25+x26+x27+x28+x29+x30+x31+x32+x33;
DO U=DL TO 500-DL BY DL;
INTEG=0;
FUN=U*5.3879E-43**-1*(U**-1)*(U**33)*EXP(T*-U);
FUN2=DL**1*FUN;
INTEG+FUN2;
OUTPUT;
END;
*PROC PRINT DATA=CC;RUN;DATA DT1; SET CC;RUN;
PROC SORT DATA=DT1 OUT=DT1_SORT; BY INTEG; RUN;
PROC PRINT; VAR U INTEG; RUN;

Appendix C1
Mode using Gamma distribution as informative prior
DATA DD;
INPUT X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15 X16 X17
X18 X19 X20 X21 X22 X23 X24 X25 X26 X27 X28 X29 X30 X31 X32 X33 A1 B1 DL;
CARDS;
6.8 6.7 7.2 6.7 6.5 7.1 7.1 7.0 7.3 6.9 6.9 6.8 6.2 6.4 5.6 6.6 6.8
7.2 5.8 9.1 6.0 6.95 6 5.8 8.2 7.9 7.9 7.7 6.4 6.1 6.1 6.4 7.9 0.01 40.01
0.01
;
DATA CC; SET DD;
T=x1+x2+x3+x4+x5+x6+x7+x8+x9+x10+x11+x12+x13+x14+x15+x16+x17+x18+x19+x20+
x21+x22+x23+x24+x25+x26+x27+x28+x29+x30+x31+x32+x33;
DO U=DL TO 500-DL BY DL;
INTEG=0;
PDF=2.4362E-45**-1*U*
(U**((A1+33)-1))*(EXP(-U*(B1+T)));
FUN2=DL**1*PDF;
INTEG+FUN2;
OUTPUT;
END;
*PROC PRINT DATA=CC;RUN;DATA DT1; SET CC; RUN;
PROC SORT DATA=DT1 OUT=DT1_SORT; BY INTEG; RUN;
PROC PRINT; VAR U INTEG; RUN;

Appendix C2
Plot using Gamma informative
DATA DD;
INPUT X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15 X16 X17
X18 X19 X20 X21 X22 X23 X24 X25 X26 X27 X28 X29 X30 X31 X32 X33 A1 B1 DL;
CARDS;
6.8 6.7 7.2 6.7 6.5 7.1 7.1 7.0 7.3 6.9 6.9 6.8 6.2 6.4 5.6 6.6 6.8
7.2 5.8 9.1 6.0 6.95 6 5.8 8.2 7.9 7.9 7.7 6.4 6.1 6.1 6.4 7.9 0.01 40.01
0.01
;
DATA CC; SET DD;
T=x1+x2+x3+x4+x5+x6+x7+x8+x9+x10+x11+x12+x13+x14+x15+x16+x17+x18+x19+x20+
x21+x22+x23+x24+x25+x26+x27+x28+x29+x30+x31+x32+x33;
DO U=DL TO 0.26-DL BY DL;
INTEG=0;
PDF=2.4362E-45**-1*U*
(U**((A1+33)-1))*(EXP(-U*(B1+T)));
FUN2=DL**1*PDF;
INTEG+FUN2;
OUTPUT;
END;
AXIS1 Length=2 IN;
AXIS2 Length=4 IN;
PROC GPLOT DATA=CC;
SYMBOL1 INTERPOL=JOIN;
PLOT INTEG*U=1 / VAXIS=AXIS1 HAXIS=AXIS2;
RUN;

Appendix D
Elicitation gamma informative
/*fitted prior predictive probabilities*/
DATA D1; N=20; DA=1;DL=0.01;
DO A1=0.01 TO 0.01 BY DA; DO B1=0.01 TO 40.01 BY DA;
X=2;
CL1=0;
DO U=DL TO 1-DL BY DL;
PPD=U**(A1+N-1)*EXP(-U*(B1+X))*B1**A1/(GAMMA(A1)*GAMMA(X+1));
GP1=(DL**1)*PPD; CL1+GP1;
*OUTPUT; END; *OUTPUT; END; END;
DATA D2; SET D1; IF X=2;
*PROC PRINT DATA=D2; RUN;
PROC SORT DATA=D2;BY A1 B1 ; RUN;
PROC MEANS DATA=D2 SUM NOPRINT; VAR CL1;BY A1 B1 ;
OUTPUT OUT=D3 SUM=FCL1; *PROC PRINT DATA=D3; RUN;
/* FITTED C.L. NO. 2, i.e. CL2 */
DATA D4;N=15; X=0; DA=1;DL=0.01;
DO A1=0.01 TO 0.01 BY DA; DO B1=0.01 TO 40.01 BY DA;
CL2=0;
DO U=DL TO 1-DL BY DL;
PPD=U**(A1+N-1)*EXP(-U*(B1+X))*B1**A1/(GAMMA(A1)*GAMMA(X+1));
GP2=(DL**1)*PPD; CL2+GP2;
*OUTPUT; END; *OUTPUT; END; END;
DATA D5; SET D4; IF X=0;
*PROC PRINT DATA=D4; RUN;
PROC SORT DATA=D5;BY A1 B1 ; RUN;
PROC MEANS DATA=D5 SUM NOPRINT; VAR CL2;BY A1 B1 ;
OUTPUT OUT=D6 SUM=FCL2; *PROC PRINT DATA=D5; RUN;
/*CALCULATION OF FUNCTION SAI*/
DATA DDD; MERGE D3 D6;
SAI=ABS(FCL1-0.0005)+ABS(FCL2-0.00001);
PROC PRINT DATA=DDD; RUN;
/* MINIMUM VALUE OF FUNCTION SAI*/
DATA DD1; SET DDD;
PROC SORT; BY SAI;
PROC PRINT DATA=DD1 (OBS=10);
VAR B1 A1 SAI; RUN;

Appendix E
Variance Emergency calls using uniform prior
DATA DD;
INPUT T1 T2 T3 T4 T5 DL;
CARDS;
84 1 70 4 5 0.05
;
DATA CC;
SET DD;
DO U1=DL TO 1-DL BY DL;
DO U2=DL TO 1-U1-DL BY DL;
DO U3=DL TO 1-U1-U2-DL BY DL;
DO U4=DL TO 1-U1-U2-U3-DL BY DL;
PDF= U1*1.1664E-75**-1*(U1**T1)*(U2**T2)*(U3**T3)*(U4**T4)*((1-U1-U2-U3-U4)**T5);
FUN2=DL**4*PDF;
INTEG+FUN2;
FUN3=U1**2*1.1664E-75**-1*(U1**T1)*(U2**T2)*(U3**T3)*(U4**T4)*((1-U1-U2-U3-U4)**T5);
FUN4=DL**4*FUN3;
INTEG1+FUN4;
VARR=INTEG1-INTEG**2;
END;END; END; END;
PROC PRINT DATA=CC; VAR VARR INTEG;
RUN;

Appendix E.1.
Variance of fake calls using uniform prior
DATA DD;
INPUT T1 T2 T3 T4 T5 DL;
CARDS;
84 1 70 4 5 0.05
;
DATA CC;
SET DD;
DO U1=DL TO 1-DL BY DL;
DO U2=DL TO 1-U1-DL BY DL;
DO U3=DL TO 1-U1-U2-DL BY DL;
DO U4=DL TO 1-U1-U2-U3-DL BY DL;
PDF= U2*1.1664E-75**-1*(U1**T1)*(U2**T2)*(U3**T3)*(U4**T4)*((1-U1-U2-U3-U4)**T5);
FUN2=DL**4*PDF;
INTEG+FUN2;
FUN3=U2**2*1.1664E-75**-1*(U1**T1)*(U2**T2)*(U3**T3)*(U4**T4)*((1-U1-U2-U3-U4)**T5);
FUN4=DL**4*FUN3;
INTEG1+FUN4;
VARR=INTEG1-INTEG**2;
END;END; END; END;
PROC PRINT DATA=CC; VAR VARR INTEG;
RUN;

Appendix E. 2.
Variance of wrong calls using uniform prior
DATA DD;
INPUT T1 T2 T3 T4 T5 DL;
CARDS;
84 1 70 4 5 0.05
;
DATA CC;
SET DD;
DO U1=DL TO 1-DL BY DL;
DO U2=DL TO 1-U1-DL BY DL;
DO U3=DL TO 1-U1-U2-DL BY DL;
DO U4=DL TO 1-U1-U2-U3-DL BY DL;
PDF=U3*1.1664E-75**-1*(U1**T1)*(U2**T2)*(U3**T3)*(U4**T4)*((1-U1-U2-U3-U4)**T5);
FUN2=DL**4*PDF;
INTEG+FUN2;
FUN3=U3**2*1.1664E-75**-1*(U1**T1)*(U2**T2)*(U3**T3)*(U4**T4)*((1-U1-U2-U3-U4)**T5);
FUN4=DL**4*FUN3;
INTEG1+FUN4;
VARR=INTEG1-INTEG**2;
END;END; END; END;
PROC PRINT DATA=CC; VAR VARR INTEG;
RUN;

Appendix E. 3.
Variance of abusing calls using uniform prior
DATA DD;
INPUT T1 T2 T3 T4 T5 DL;
CARDS;
84 1 70 4 5 0.05
;
DATA CC;
SET DD;
DO U1=DL TO 1-DL BY DL;
DO U2=DL TO 1-U1-DL BY DL;
DO U3=DL TO 1-U1-U2-DL BY DL;
DO U4=DL TO 1-U1-U2-U3-DL BY DL;
PDF=U4*1.1664E-75**-1*(U1**T1)*(U2**T2)*(U3**T3)*(U4**T4)*((1-U1-U2-U3-U4)**T5);
FUN2=DL**4*PDF;
INTEG+FUN2;
FUN3=U4**2*1.1664E-75**-1*(U1**T1)*(U2**T2)*(U3**T3)*(U4**T4)*((1-U1-U2-U3-U4)**T5);
FUN4=DL**4*FUN3;
INTEG1+FUN4;
VARR=INTEG1-INTEG**2;
END;END; END; END;
PROC PRINT DATA=CC; VAR VARR INTEG;
RUN;

Appendix E.4
Variance of distorted calls using uniform prior

DATA DD;
INPUT T1 T2 T3 T4 T5 DL;
CARDS;
84 1 70 4 5 0.05
;
DATA CC;
SET DD;
DO U1=DL TO 1-DL BY DL;
DO U2=DL TO 1-U1-DL BY DL;
DO U3=DL TO 1-U1-U2-DL BY DL;
DO U4=DL TO 1-U1-U2-U3-DL BY DL;
PDF=(1-U1-U2-U3-U4)*1.1664E-75**-1*(U1**T1)*(U2**T2)*(U3**T3)*(U4**T4)*((1-U1-U2-U3-U4)**T5);
FUN2=DL**4*PDF;
INTEG+FUN2;
FUN3=(1-U1-U2-U3-U4)**2*1.1664E-75**-1*(U1**T1)*(U2**T2)*(U3**T3)*(U4**T4)*((1-U1-U2-U3-U4)**T5);
FUN4=DL**4*FUN3;
INTEG1+FUN4;
VARR=INTEG1-INTEG**2;
END;END; END; END;
PROC PRINT DATA=CC; VAR VARR INTEG;
RUN;

Appendix F
Variance of Emergency calls using informative prior
DATA DD;
INPUT T1 T2 T3 T4 T5 A1 A2 A3 A4 A5 DL;
CARDS;
84 1 70 4 5 20 20 20 1 1 0.05
;
DATA CC;
SET DD;
DO U1=DL TO 1-DL BY DL;
DO U2=DL TO 1-U1-DL BY DL;
DO U3=DL TO 1-U1-U2-DL BY DL;
DO U4=DL TO 1-U1-U2-U3-DL BY DL;
PDF=U1*(1.2983E-113**-1)*U1**(T1+A1-1)*U2**(T2+A2-1)*U3**(T3+A3-1)*U4**(T4+A4-1)*(1-U1-U2-U3-U4)**(T5+A5-1);
FUN2=DL**4*PDF;
INTEG+FUN2;
FUN3=U1**2*(1.2983E-113**-1)*U1**(T1+A1-1)*U2**(T2+A2-1)*U3**(T3+A3-1)*U4**(T4+A4-1)*(1-U1-U2-U3-U4)**(T5+A5-1);
FUN4=DL**4*FUN3;
INTEG1+FUN4;
VARR=INTEG1-INTEG**2;
END;END; END;END;
PROC PRINT DATA=CC; VAR VARR INTEG;
RUN;

Appendix F.1.
Variance of Fake calls using informative prior
DATA DD;
INPUT T1 T2 T3 T4 T5 A1 A2 A3 A4 A5 DL;
CARDS;
84 1 70 4 5 20 20 20 1 1 0.05
;
DATA CC;
SET DD;
DO U1=DL TO 1-DL BY DL;
INTEG=0;
DO U2=DL TO 1-U1-DL BY DL;
DO U3=DL TO 1-U1-U2-DL BY DL;
DO U4=DL TO 1-U1-U2-U3-DL BY DL;
PDF=U2*(1.2983E-113**-1)*U1**(T1+A1-1)*U2**(T2+A2-1)*U3**(T3+A3-1)*U4**(T4+A4-1)*(1-U1-U2-U3-U4)**(T5+A5-1);
FUN2=DL**4*PDF;
INTEG+FUN2;
END; END; END;
OUTPUT;
END;
*PROC PRINT DATA=CC;RUN;DATA DT1; SET CC; RUN;
PROC SORT DATA=DT1 OUT=DT1_SORT; BY INTEG; RUN;
PROC PRINT; VAR U2 INTEG; RUN;

Appendix F.2.
Variance of Wrong calls using informative prior

DATA DD;
INPUT T1 T2 T3 T4 T5 A1 A2 A3 A4 A5 DL;
CARDS;
84 1 70 4 5 20 20 20 1 1 0.05
;
DATA CC;
SET DD;
DO U1=DL TO 1-DL BY DL;
DO U2=DL TO 1-U1-DL BY DL;
DO U3=DL TO 1-U1-U2-DL BY DL;
DO U4=DL TO 1-U1-U2-U3-DL BY DL;
PDF=U3*(1.2983E-113**-1)*U1**(T1+A1-1)*U2**(T2+A2-1)*U3**(T3+A3-1)*U4**(T4+A4-1)*(1-U1-U2-U3-U4)**(T5+A5-1);
FUN2=DL**4*PDF;
INTEG+FUN2;
FUN3=U3**2*(1.2983E-113**-1)*U1**(T1+A1-1)*U2**(T2+A2-1)*U3**(T3+A3-1)*U4**(T4+A4-1)*(1-U1-U2-U3-U4)**(T5+A5-1);
FUN4=DL**4*FUN3;
INTEG1+FUN4;
VARR=INTEG1-INTEG**2;
END;END; END;END;
PROC PRINT DATA=CC; VAR VARR INTEG;
RUN;

Appendix F.3
Variance of Abusing calls using informative prior
DATA DD;
INPUT T1 T2 T3 T4 T5 A1 A2 A3 A4 A5 DL;
CARDS;
84 1 70 4 5 20 20 20 1 1 0.05
;
DATA CC;
SET DD;
DO U1=DL TO 1-DL BY DL;
DO U2=DL TO 1-U1-DL BY DL;
DO U3=DL TO 1-U1-U2-DL BY DL;
DO U4=DL TO 1-U1-U2-U3-DL BY DL;
PDF=U4*(1.2983E-113**-1)*U1**(T1+A1-1)*U2**(T2+A2-1)*U3**(T3+A3-1)*U4**(T4+A4-1)*(1-U1-U2-U3-U4)**(T5+A5-1);
FUN2=DL**4*PDF;
INTEG+FUN2;
FUN3=U4**2*(1.2983E-113**-1)*U1**(T1+A1-1)*U2**(T2+A2-1)*U3**(T3+A3-1)*U4**(T4+A4-1)*(1-U1-U2-U3-U4)**(T5+A5-1);
FUN4=DL**4*FUN3;
INTEG1+FUN4;
VARR=INTEG1-INTEG**2;
END;END; END;END;
PROC PRINT DATA=CC; VAR VARR INTEG;
RUN;

Appendix F.4.
Variance of Distorted calls using informative prior
DATA DD;
INPUT T1 T2 T3 T4 T5 A1 A2 A3 A4 A5 DL;
CARDS;
84 1 70 4 5 20 20 20 1 1 0.05
;
DATA CC;
SET DD;
DO U1=DL TO 1-DL BY DL;
DO U2=DL TO 1-U1-DL BY DL;
DO U3=DL TO 1-U1-U2-DL BY DL;
DO U4=DL TO 1-U1-U2-U3-DL BY DL;
PDF=(1-U1-U2-U3-U4)*(1.2983E-113**-1)*U1**(T1+A1-1)*U2**(T2+A2-1)*U3**(T3+A3-1)*U4**(T4+A4-1)*(1-U1-U2-U3-U4)**(T5+A5-1);
FUN2=DL**4*PDF;
INTEG+FUN2;
FUN3=(1-U1-U2-U3-U4)**2*(1.2983E-113**-1)*U1**(T1+A1-1)*U2**(T2+A2-1)*U3**(T3+A3-1)*U4**(T4+A4-1)*(1-U1-U2-U3-U4)**(T5+A5-1);
FUN4=DL**4*FUN3;
INTEG1+FUN4;
VARR=INTEG1-INTEG**2;
END;END; END;END;
PROC PRINT DATA=CC; VAR VARR INTEG;
RUN;

Appendix G
Mode using Jeffreys prior
DATA DD;
INPUT X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15 X16 X17
X18 X19 X20 X21 X22 X23 X24 X25 X26 X27 X28 X29 X30 X31 X32 X33 DL;
CARDS;
6.8 6.7 7.2 6.7 6.5 7.1 7.1 7.0 7.3 6.9 6.9 6.8 6.2 6.4 5.6 6.6 6.8 7.2
5.8 9.1
6.0 6.95 6 5.8 8.2 7.9 7.9 7.7 6.4 6.1 6.1 6.4 7.9 0.01
;
DATA CC; SET DD;
T=x1+x2+x3+x4+x5+x6+x7+x8+x9+x10+x11+x12+x13+x14+x15+x16+x17+x18+x19+x20+
x21+x22+x23+x24+x25+x26+x27+x28+x29+x30+x31+x32+x33;
DO U=DL TO 0.30-DL BY DL;
INTEG=0;
FUN=U*5.3879E-43**-1*(U**-1)*(U**33)*EXP(T*-U);
FUN2=DL**1*FUN;
INTEG+FUN2;OUTPUT;
END;
*PROC PRINT DATA=CC;RUN;DATA DT1; SET CC; RUN;
PROC SORT DATA=DT1 OUT=DT1_SORT; BY INTEG; RUN;
PROC PRINT; VAR U INTEG; RUN;

Appendix G. 1.
Mode using Jeffreys prior by using Wolfram Mathematica

Appendix G.2.
Plot using Jeffreys prior
DATA DD;
INPUT X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15 X16 X17
X18 X19 X20 X21 X22 X23 X24 X25 X26 X27 X28 X29 X30 X31 X32 X33 DL;
CARDS;
6.8 6.7 7.2 6.7 6.5 7.1 7.1 7.0 7.3 6.9 6.9 6.8 6.2 6.4 5.6 6.6 6.8 7.2
5.8 9.1
6.0 6.95 6 5.8 8.2 7.9 7.9 7.7 6.4 6.1 6.1 6.4 7.9 0.01
;
DATA CC; SET DD;
T=x1+x2+x3+x4+x5+x6+x7+x8+x9+x10+x11+x12+x13+x14+x15+x16+x17+x18+x19+x20+
x21+x22+x23+x24+x25+x26+x27+x28+x29+x30+x31+x32+x33;
DO U=DL TO 0.30-DL BY DL;
INTEG=0;
FUN=U*5.3879E-43**-1*(U**-1)*(U**33)*EXP(T*-U);
FUN2=DL**1*FUN;
INTEG+FUN2;
OUTPUT;
END;
AXIS1 Length=2 IN;
AXIS2 Length=4 IN;
PROC GPLOT DATA=CC;
SYMBOL1 INTERPOL=JOIN;
PLOT INTEG*U=1 / VAXIS=AXIS1 HAXIS=AXIS2;
RUN;

Appendix H:
Elicitation
DATA D1;N=100; DA=1; DL=0.1;X1=10; X2=20-X1; X3=50-X1-X2; X4=90-X1-X2-X3;
DO A1=1 TO 10 BY DA; DO A2=1 TO 10 BY DA;DO A3=1 TO 10 BY DA; DO A4=1 TO 10 BY DA;
CL1=0;
DO T1=DL TO 1-DL BY DL;
DO T2=DL TO 1-T1-DL BY DL;
DO T3=DL TO 1-T1-T2-DL BY DL;
PPD=(GAMMA(N+1)*GAMMA(A1+A2+A3+A4)*T1**(X1+A1-1)*T2**(X2+A2-1)*T3**(X3+A3-1)*
(1-T1-T2-T3)**(X4+A4-1))/(GAMMA(X2+1)*GAMMA(X1+1)*GAMMA(X3+1)*GAMMA(X4+1)*GAMMA(A1)*
GAMMA(A2)*GAMMA(A3)*GAMMA(A4));
GP1=(DL**3)*PPD; CL1+GP1;
OUTPUT; END; OUTPUT; END; END; END; END; END; END;
DATA D2; SET D1; IF X1=10; IF X2=20-X1; IF X3=50-X1-X2; IF X4=90-X1-X2-X3;
*PROC PRINT DATA=D2; RUN;
PROC SORT DATA=D2;BY A1 A2 A3 A4; RUN;
PROC MEANS DATA=D2 SUM NOPRINT; VAR CL1;BY A1 A2 A3 A4;
OUTPUT OUT=D3 SUM=FCL1; *PROC PRINT DATA=D3; RUN;

DATA D4; N=100; DA=1; DL=0.1;X1=15; X2=25-X1; X3=40-X1-X2; X4=80-X1-X2-X3;


DO A1=1 TO 10 BY DA; DO A2=1 TO 10 BY DA;DO A3=1 TO 10 BY DA; DO A4=1 TO 10 BY DA;
CL2=0;
DO T1=DL TO 1-DL BY DL;
DO T2=DL TO 1-T1-DL BY DL;
DO T3=DL TO 1-T1-T2-DL BY DL;
PPD=(GAMMA(N+1)*GAMMA(A1+A2+A3+A4)*T1**(X1+A1-1)*T2**(X2+A2-1)*T3**(X3+A3-1)*
(1-T1-T2-T3)**(X4+A4-1))/(GAMMA(X2+1)*GAMMA(X1+1)*GAMMA(X3+1)*GAMMA(X4+1)*GAMMA(A1)*
GAMMA(A2)*GAMMA(A3)*GAMMA(A4));
GP2=(DL**3)*PPD; CL2+GP2;
OUTPUT; END; *OUTPUT; END; END; END; END; END; END;
DATA D5;SET D4; IF X1=15; IF X2=25-X1; IF X3=40-X1-X2; IF X4=80-X1-X2-X3;
*PROC PRINT DATA=D5; RUN;
PROC SORT DATA=D5;BY A1 A2 A3 A4; RUN;
PROC MEANS DATA=D5 SUM NOPRINT; VAR CL2;BY A1 A2 A3 A4;
OUTPUT OUT=D6 SUM=FCL2; *PROC PRINT DATA=D6; RUN;
/*CALCULATION OF FUNCTION SAI*/
DATA DDD; MERGE D3 D6 ;
SAI=ABS(FCL1-0.05)+ABS(FCL2-0.04);
*PROC PRINT DATA=DDD; RUN;
/* MINIMUM VALUE OF FUNCTION SAI*/
DATA DD1; SET DDD;
PROC SORT; BY SAI;
PROC PRINT DATA=DD1 (OBS=10);
VAR A1 A2 A3 A4 SAI; RUN;

References:

Daumé III, H., and Marcu, D. (2005). A Bayesian Model for Supervised Clustering with the Dirichlet Process Prior. Journal of Machine Learning Research, 6, 1551-1577. Information Sciences Institute, University of Southern California, Marina del Rey, CA, USA.

Feroze, N., and Aslam, M. (2012). Bayesian Analysis of Exponentiated Gamma Distribution under Type II Censored Samples. International Journal of Advanced Science and Technology, Vol. 49, December 2012. Allama Iqbal Open University and Quaid-i-Azam University, Islamabad, Pakistan.

Verma, T. S., and Pearl, J. (1990). Equivalence and Synthesis of Causal Models. In Uncertainty in Artificial Intelligence (UAI), pages 220-227, San Francisco. Elsevier Science Publishers B.V. (North-Holland).
