
Joshua Marvel Lumban Gaol

29115314
CHAPTER 6
DECISION TREES AND INFLUENCE DIAGRAMS
In this chapter we have illustrated the construction of decision trees and the rollback
method for identifying the optimal policy. We described an approximation method for
dealing with continuous probability distributions within decision trees and summarized some
practical applications of decision trees within decision analysis.
1. Constructing a decision tree
Two symbols are used in decision trees. A square is used to represent a decision
node and, because each branch emanating from this node represents an option, the
decision maker can choose which branch to follow. A circle, on the other hand, is
used to represent a chance node. The branches which stem from this sort of node
represent the possible outcomes of a given course of action and the branch which is
followed will be determined, not by the decision maker, but by circumstances which
lie beyond his or her control. The branches emanating from a circle are therefore
labeled with probabilities which represent the decision maker's estimate of the
probability that a particular branch will be followed. Obviously, it is not sensible to
attach probabilities to the branches which stem from a square.
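As a concrete illustration, the two node types can be represented directly in code. The following is a minimal Python sketch; the class names and the idea of a separate terminal payoff node are our own choices for illustration, not notation from the text.

    from dataclasses import dataclass

    @dataclass
    class DecisionNode:    # drawn as a square: the decision maker chooses the branch
        options: dict      # maps each option label to the node it leads to

    @dataclass
    class ChanceNode:      # drawn as a circle: circumstances choose the branch
        outcomes: list     # (probability, node) pairs; the probabilities sum to 1

    @dataclass
    class Payoff:          # the end of a path through the tree
        value: float

Note that, exactly as in the diagram convention, probabilities attach only to the branches of a ChanceNode, never to those of a DecisionNode.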
2. Determining the optimal policy
It can be seen that our decision tree consists of a set of policies. A policy is a plan of
action stating which option is to be chosen at each decision node that might be
reached under that policy. For example, one policy would be: choose the electrical
design; if it fails, modify the design. Another policy would be: choose the electrical
design; if it fails, abandon the project. The technique for determining the optimal
policy in a decision tree is known as the rollback method. To apply this method, we
analyze the tree from right to left by considering the later decisions first.
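Building on the sketch above, the rollback method can be written as a short recursion: at a chance node take the probability-weighted average of the branch values, and at a decision node take the best branch. Because the recursion evaluates a node's children before the node itself, later decisions are automatically considered first. The tree and all numbers below are assumed purely for illustration.

    def rollback(node):
        """Return the expected payoff of the optimal policy rooted at this node."""
        if isinstance(node, Payoff):
            return node.value
        if isinstance(node, ChanceNode):
            # Circumstances decide: take the expected value over the outcomes.
            return sum(p * rollback(child) for p, child in node.outcomes)
        # The decision maker decides: take the best available branch.
        return max(rollback(child) for child in node.options.values())

    # A hypothetical version of the policies described above.
    tree = DecisionNode(options={
        "electrical design": ChanceNode(outcomes=[
            (0.7, Payoff(100_000)),                  # design succeeds (assumed)
            (0.3, DecisionNode(options={             # design fails: a later decision
                "modify the design":   Payoff(20_000),
                "abandon the project": Payoff(-10_000),
            })),
        ]),
        "gas design": Payoff(60_000),
    })
    print(rollback(tree))  # 76000.0: choose the electrical design; if it fails, modify it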
One step-by-step procedure for turning an influence diagram into a decision tree is as
follows:
1. Identify a node with no arrows pointing into it (since an influence diagram contains
no loops, at least one such node must exist).
2. If there is a choice between a decision node and an event node, choose the decision
node.
3. Place the node at the beginning of the tree and remove the node from the influence
diagram.
4. For the now-reduced diagram, choose another node with no arrows pointing into it. If
there is a choice, a decision node should be chosen.
5. Place this node next in the tree and remove it from the influence diagram.
6. Repeat the above procedure until all the nodes have been removed from the influence
diagram.
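This ordering procedure is mechanical enough to express in code. The sketch below assumes the influence diagram is supplied as a mapping from each node to the set of nodes whose arrows point into it, together with the set of decision nodes; all names and the example diagram are hypothetical.

    def tree_order(incoming, decision_nodes):
        """Return the nodes in the order they should appear in the decision tree."""
        remaining = {node: set(preds) for node, preds in incoming.items()}
        order = []
        while remaining:
            # Steps 1 and 4: find nodes with no arrows pointing into them.
            # Because the diagram contains no loops, there is always at least one.
            ready = [n for n, preds in remaining.items() if not preds]
            # Steps 2 and 4: prefer a decision node when there is a choice.
            ready.sort(key=lambda n: n not in decision_nodes)
            node = ready[0]
            # Steps 3, 5 and 6: place the node in the tree, remove it, repeat.
            order.append(node)
            del remaining[node]
            for preds in remaining.values():
                preds.discard(node)
        return order

    # Hypothetical diagram: the "test" decision influences "result", and both
    # influence the "launch" decision.
    print(tree_order({"test": set(), "result": {"test"}, "launch": {"test", "result"}},
                     decision_nodes={"test", "launch"}))
    # ['test', 'result', 'launch']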
Finally, we analyzed the process of generating decision tree representations of decision
problems and advocated the influence diagram as a key technique to facilitate decision
structuring. Despite the benefits of using decision trees, some decision analysts counsel
against using them too early in the decision process, before a broad perspective of the
decision problem has been obtained. For example, Chapman and Ward argue that decision
trees should often be embedded in a more wide-ranging analysis that includes assessments of
the sources of uncertainty and exploration of the decision maker's objectives. We broadly
agree with this view and have therefore presented decision trees in this book as just one of
many potentially useful decision-aiding tools, unlike most other decision analysis texts,
which focus almost exclusively on decision trees.

CHAPTER 8
REVISING JUDGEMENTS IN THE LIGHT OF NEW INFORMATION

BAYES' THEOREM
In Bayes' theorem an initial probability estimate is known as a prior probability. Thus
the marketing manager's assessment that there was an 80% probability that sales of the
calculator would reach break-even level was a prior probability. When Bayes' theorem is
used to modify a prior probability in the light of new information the result is known as a
posterior probability. We will not put forward a mathematical proof of Bayes' theorem here.
Instead, we will attempt to develop the idea intuitively and then show how a probability tree
can be used to revise prior probabilities.
The steps in the process which we have just applied are summarized below:
1. Construct a tree with branches representing all the possible events which can occur
and write the prior probabilities for these events on the branches.
2. Extend the tree by attaching to each branch a new branch which represents the new
information which you have obtained. On each branch write the conditional
probability of obtaining this information given the circumstance represented by the
preceding branch.
3. Obtain the joint probabilities by multiplying each prior probability by the conditional
probability which follows it on the tree.
4. Sum the joint probabilities.
5. Divide the appropriate joint probability by the sum of the joint probabilities to
obtain the required posterior probability.
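The five steps translate directly into a few lines of Python. The sketch below uses the 80% prior probability from the text; the conditional probabilities of a (hypothetical) favourable market-research report are assumed purely for illustration.

    # Step 1: prior probabilities on the branches of the tree.
    prior = {"break-even": 0.8, "below break-even": 0.2}

    # Step 2: conditional probability of the new information (a favourable
    # report, say) given each event. These reliabilities are assumed.
    likelihood = {"break-even": 0.9, "below break-even": 0.3}

    # Step 3: joint probabilities = prior x conditional.
    joint = {event: prior[event] * likelihood[event] for event in prior}

    # Step 4: sum the joint probabilities.
    total = sum(joint.values())

    # Step 5: posterior = joint probability / sum of joint probabilities.
    posterior = {event: joint[event] / total for event in joint}
    print(posterior)  # {'break-even': 0.923..., 'below break-even': 0.0769...}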
APPLYING BAYES' THEOREM TO A DECISION PROBLEM
We will now consider the application of Bayes theorem to a decision problem: a
process which is sometimes referred to as posterior analysis. This simply involves the use of
the posterior probabilities, rather than the prior probabilities, in the decision model.
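As a small sketch of posterior analysis, continuing the names from the code above: the same decision model is simply evaluated with the posterior rather than the prior probabilities. The payoffs are assumed for illustration.

    # Hypothetical decision model for the calculator problem.
    payoffs = {
        "launch":        {"break-even": 50_000, "below break-even": -30_000},
        "do not launch": {"break-even": 0,      "below break-even": 0},
    }

    def expected_payoff(action, probs):
        return sum(probs[event] * payoffs[action][event] for event in probs)

    # Prior vs posterior analysis: the same model, different probabilities.
    best_prior = max(payoffs, key=lambda a: expected_payoff(a, prior))
    best_posterior = max(payoffs, key=lambda a: expected_payoff(a, posterior))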
ASSESSING THE VALUE OF NEW INFORMATION
New information can remove or reduce the uncertainty involved in a decision and
thereby increase the expected payoff. For example, if the retailer in the previous section was,
by some means, able to obtain perfectly accurate information about the summer demand then
he could ensure that his stock levels exactly matched the level of demand. This would clearly
lead to an increase in his expected profit. However, in many circumstances it may be
expensive to obtain information since it might involve, for example, the use of scientific tests,
the engagement of the services of a consultant or the need to carry out a market research
survey.
A summary of the main stages in the above analysis is given below:
1. Determine the course of action which would be chosen using only the prior
probabilities and record the expected payoff of this course of action;
2. Identify the possible indications which the new information can give;
3. For each indication:
a. Determine the probability that this indication will occur;
b. Use Bayes' theorem to revise the probabilities in the light of this indication;

c. Determine the best course of action in the light of this indication (i.e. using the
posterior probabilities) and the expected payoff of this course of action;
4. Multiply the probability of each indication occurring by the expected payoff of the
course of action which should be taken if that indication occurs and sum the resulting
products. This will give the expected payoff with imperfect information;
5. The expected value of the imperfect information is equal to the expected payoff with
imperfect information less the expected payoff of the course of action which would be
selected using the prior probabilities.
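Continuing the running sketch, the five stages can be traced in code. The indications here are the two possible outcomes of a hypothetical market-research survey, and all reliabilities are assumed.

    def bayes(prior, likelihood):
        """Return P(indication) and the posterior probabilities (stages 3a, 3b)."""
        joint = {e: prior[e] * likelihood[e] for e in prior}
        total = sum(joint.values())
        return total, {e: joint[e] / total for e in joint}

    # Stage 1: best expected payoff using the prior probabilities alone.
    prior_payoff = max(expected_payoff(a, prior) for a in payoffs)

    # Stage 2: the indications the new information can give, with the
    # (assumed) probability of each indication given each event.
    indications = {
        "favourable report":   {"break-even": 0.9, "below break-even": 0.3},
        "unfavourable report": {"break-even": 0.1, "below break-even": 0.7},
    }

    # Stages 3 and 4: revise, re-optimise, and weight by P(indication).
    payoff_with_info = 0.0
    for likelihood in indications.values():
        p_indication, post = bayes(prior, likelihood)           # stages 3a, 3b
        best = max(expected_payoff(a, post) for a in payoffs)   # stage 3c
        payoff_with_info += p_indication * best                 # stage 4

    # Stage 5: expected value of the imperfect information.
    evii = payoff_with_info - prior_payoff

With the numbers assumed here this gives an EVII of about 200, i.e. the survey would be worth commissioning only if it cost less than that amount.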
PRACTICAL CONSIDERATIONS
We will now outline a number of examples of the application of the methods we have
just discussed and consider some of the practical problems involved. Clearly, it is easier to
identify the expected value of perfect as opposed to imperfect information, and we
recommend that, in general, calculating the EVPI should be the first step in any
information-evaluation exercise. The EVPI can act as a useful screen, since some sources of
information may prove to be too expensive, even if they were to offer perfectly reliable data,
which is unlikely.
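By contrast with the EVII, the EVPI needs no reliability assessments at all: under perfect information the decision maker simply receives the best payoff available for whichever event actually occurs. Continuing the assumed numbers from the earlier sketches:

    # Expected payoff with perfect information: for each event, the best
    # action for that event, weighted by the prior probability of the event.
    payoff_perfect = sum(prior[e] * max(payoffs[a][e] for a in payoffs)
                         for e in prior)
    evpi = payoff_perfect - prior_payoff   # 40000 - 34000 = 6000 here

Since no real source of information can be worth more than this, any source costing more than the EVPI can be screened out immediately.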
In this chapter we have discussed the role that new information can play in revising
the judgments of a decision maker. We argued that Bayes' theorem shows the decision maker
how his or her judgments should be modified in the light of new information, and we showed
that this revision will depend both upon the vagueness of the prior judgment and the
reliability of the new information. Of course, receiving information is often a sequential
process. Your prior probability will reflect the information you have received up to the point
in time when you make your initial probability assessment. As each new instalment of
information arrives, you may continue to revise your probability. The posterior probability
you had at the end of last week may be treated as the prior probability this week, and be
revised in the light of this week's information.
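In code, this sequential revision is just repeated application of the bayes function from the earlier sketch, with each posterior feeding in as the next prior; the weekly likelihoods below are assumed.

    probs = dict(prior)                                  # this week's prior
    weekly_reports = [
        {"break-even": 0.9, "below break-even": 0.3},    # week 1 (assumed)
        {"break-even": 0.6, "below break-even": 0.4},    # week 2 (assumed)
    ]
    for likelihood in weekly_reports:
        _, probs = bayes(probs, likelihood)   # posterior becomes next week's prior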
We also looked at how the value of new information can be assessed. The expected
value of perfect information was shown to be a useful measure of the maximum amount that
it would be worth paying for information. Calculating the expected value of imperfect
information was seen to be a more involved process, because the decision maker also has to
judge the reliability of the information. Because of this, we stressed the importance of
sensitivity analysis, which allows the decision maker to study the effect of changes in these
assessments.
