
AUDITING: A RISK ANALYSIS APPROACH
5th edition
Larry F. Konrath

Electronic Presentation by Harold O. Wilson
1
Chapter 10
[Graphic: inference from a sample of n = 195 to a population of N = 10,000]
2
KEY CONCEPTS OVERVIEW
 “Mean per unit” sampling (Classical
Variables Sampling)
 “Difference estimation” sampling (a
variation comparing audited value
to book value)
 “Probability-proportional-to-size”
sampling (viewing the “Dollar” as
the sampling unit)
3
LEARNING
OBJECTIVES
 Determine appropriate sampling methods
 Apply three sampling methods
 Evaluate sample results; either…
 Do not reject the book value as "fairly stated" (errors tolerable), given a stated risk of improper "acceptance," or
 Reject the book value as "fairly stated" (errors intolerable), given a stated risk of improper rejection.

4
INTRODUCTION

 Substantive tests focus on dollar amounts (not percentages). Auditors test general ledger data against "fairness of presentation."
 Variables sampling can aid in estimating (a) dollar amounts of transactions, or (b) account balances. Sample results are "extended" to the population.
 PPS sampling can be used to estimate the dollar amount of overstatement errors.

5
THREE APPROACHES to
VARIABLES SAMPLING
• Mean per unit (MPU): Calculating the sample's mean, x̄, then multiplying it by the number of items in the population, N, to create the "auditor's best estimate to date" of the population's total.

6
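A minimal MPU sketch in Python, assuming a simple list of audited sample values (the function name and numbers are illustrative, not from the text):

```python
# Mean-per-unit (MPU): the sample mean times N estimates the population total.

def mpu_estimate(audited_values, population_size):
    """Auditor's point estimate of the population total."""
    sample_mean = sum(audited_values) / len(audited_values)
    return sample_mean * population_size

# Hypothetical audited values for 5 inventory bins, with N = 20,000 bins.
print(mpu_estimate([980, 1010, 995, 1005, 960], 20_000))  # -> 19,800,000.0
```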
APPROACHES…

 Difference estimation: Calculating a mean difference between the book values, BV, and the audited values, AV, of the sampled items.
 The "population of differences" has its own mean, d̄, which, when multiplied by N, allows an estimate of the population's overstatement or understatement. An AJE may be derived.
7
APPROACHES…

PPS: emphasis is on estimating the dollar amount of possible overstatements of assets (or understatements of debts).
Each dollar is considered to be a sampling unit; the larger the "host item," the higher the odds of its inclusion in the sample!
8
Note on PPS
Probability-proportional-to-size sampling
(PPS) is a variation of attribute
sampling. In auditing, however, an
upper limit is calculated and expressed
in dollars (rather than as a percentage).
This is to be compared
with the tolerable error, as
pre-stated by the auditor.
9
Inference about populations
from sample data.
10
FAQ?
What are the critical audit factors
in deciding on sample size, n?

DR = AR / (CR x IR)
There is a risk that neither the client nor the CPA finds material misstatements (MM$).
11
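A small sketch of the audit risk model rearranged for detection risk, with illustrative planning values (the function name is an assumption):

```python
# Audit risk model: AR = IR x CR x DR, so the planned DR = AR / (IR x CR).

def detection_risk(audit_risk, inherent_risk, control_risk):
    """Detection risk the auditor can accept, given AR, IR, and CR."""
    return audit_risk / (inherent_risk * control_risk)

# Illustrative values: AR = 5%, IR = 100%, CR = 70%.
print(round(detection_risk(0.05, 1.00, 0.70), 3))  # -> 0.071, roughly the 7% beta risk used later
```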
Decisions Using Samples
(less than complete data)
 There is a probability of rejecting the truth
(a Type I risk, designated as alpha).
 There is a probability of failing to reject falsity
(a Type II risk, designated as beta).
 There is the probability of correctly believing
the truth.
The odds of these possibilities must, of course, sum to 100%.
12
Decisions Using Samples
in Auditing
 There is a risk of rejecting a proper BV
(Type I risk or alpha risk; e.g., 5%).
 There is a risk of believing an improper BV (DR, a Type II risk or beta risk; e.g., 7%).
 There is the possibility of correctly using a proper and unadjusted BV in the financials.
The probability of the latter is, of course,
1 – (α + β)
13
Decisions Using Samples
in Auditing
 Sampling risk = α + β
 Of course, if one rejects truth, additional
testing (increasing n) should ultimately
lead back to truth!
 “Alpha errors” self-correct in the search
for the truth, given time and money!
Sampling risk can approach, but will not be reduced to, ZERO, even if n = N - 1.
14
Decisions Using Samples
(Concepts & Audits)
If classical hypothesis testing were being
applied, the auditor might…
 use BV as the original hypothesis, H0, and BV plus/minus the maximum tolerable error as the alternative hypothesis, Ha, and then…
 Calculate a probability for α, and for β. The latter is the auditor's DR, which should be reduced to 5-10% or less.
15
Audit Applications

 Assessing CR too low and/or assessing DR too high impacts audit effectiveness (n might be smaller than what is necessary).
 Assessing CR too high and/or assessing DR too low impacts audit efficiency (n might be larger than what is necessary).
 The latter "overcharges" the client!

16
Auditor’s FAQ?
What are the critical factors in variables estimation sampling?

Materiality (i.e., the tolerance or precision needed), n, risks, and the standard deviation of the population.
17
Definition: “Standard
Deviation” of a population
A specific type of
measurement of the degree
of heterogeneity, i.e., scatter
around a mean of N items.
“A standard deviation of ZERO
means a sample of 1 tells all.”
18
Notation: “Standard
Deviation” of a sample

s: the standard deviation of a sample of n items, drawn from the N items in the population.
A sample subset has a mean and a standard
deviation, just as a population does.
19
Precision

Precision: a stated range within which the population mean lies, associated with a stated reliability, R (i.e., a probability).
 In auditing, the upper and lower limits of the range (UL and LL) set the maximum "stretch" of non-material misstatements.
 Thus, half the range effectively becomes the
tolerable error in either direction.
20
Reliability [R]
(Confidence Level)
Reliability: The likelihood, or probability, that
the sample’s mean plus/minus a stated
tolerance, contains the population’s mean.
 In auditing, R is the degree of confidence
the auditor can place on sampling results,
e.g., 95% is desirable. [R = 1 – alpha risk]
 As in all of life, the more reliability sought, the more information needed!
As n increases, R increases!
21
Illustration: Audit program
for inventory test counts
 Book Value (G/L) = $20,000,000
 Population size, N = 20,000 bins, i.e.,
“large”
 Materiality in dollars: M$ = $1,000,000
 Alpha risk = 5%; R = 95%

Assume the auditor assessed IR = 100% and CR = 70%, and that the beta risk must be 7% or less.

22
Illustration: Audit program
requires inventory test counts
 Mean Acceptable Tolerable Error:
MATE = M$ / N = $50 per bin
 Pilot n = 40 items
 Pilot sample's standard deviation: s = $370

Projected minimum sample size?
23
The AUDITOR’S n
 Investigators cannot control the standard
deviations discovered, but can control n.
 The confidence interval is based on UR factors (i.e., the number of "standard errors" from the mean needed for a given reliability, R), and the "standard error of the mean" formula, s / √n.
Note: MATE = UR x (s / √n)
Tables exist for UR factors, given R.
24
The “Standard Error of the
Mean” Formula

If the universe standard deviation, σ, is known, use it; otherwise, the investigator defaults to using the sample's s, and occasionally adjusts for a "finite population correction factor."
25
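A brief sketch of the standard error of the mean with the optional finite population correction; the function is an assumption, and note that the evaluation slides later divide by √(n - 1) rather than √n:

```python
import math

def standard_error(sample_sd, n, N=None):
    """s / sqrt(n), optionally times the finite population correction sqrt((N - n) / (N - 1))."""
    se = sample_sd / math.sqrt(n)
    if N is not None:
        se *= math.sqrt((N - n) / (N - 1))
    return se

# Pilot figures from the illustration: s = $370, n = 40, N = 20,000 bins.
print(round(standard_error(370, 40), 2))          # -> 58.5, without the correction
print(round(standard_error(370, 40, 20_000), 2))  # correction is negligible for a large N
```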
We also know…

A confidence interval is built outward from the random variable, x̄ (the mean of the n sample units), here using R = .95, UR = 1.96:

x̄ ± 1.96 (s / √n)

meaning MATE = 1.96 (s / √n) = $50.
Now, solve for n!
26
Classical Formula for n,
incorporating the alpha risk

UR 2
n=[ MATE ]
= 211 items
And…
27
Conservative Formula for n,
integrating alpha & beta risks
n = [ (UR x K x s) / MATE ]² = 648 items

Where, K = 1 + (UBeta / UAlpha)
The math…
28
The mathematics…

n = [ $370 (1.96) / $50 ]² = 211

n = [ $370 (1.96) (1.7551) / $50 ]² = 648

where K = 1 + (1.48 / 1.96) = 1.7551
29
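The same two formulas as a Python sketch, using the illustration's inputs (UR = 1.96 for a 5% alpha risk, UBeta = 1.48 for a 7% beta risk; the function names are assumptions):

```python
def n_classical(sd, u_r, mate):
    """Classical variables sample size, alpha risk only (unrounded)."""
    return (u_r * sd / mate) ** 2

def n_conservative(sd, u_r, u_beta, mate):
    """Conservative sample size, folding in beta risk via K = 1 + UBeta / UAlpha."""
    k = 1 + (u_beta / u_r)
    return (u_r * k * sd / mate) ** 2

# Illustration: pilot s = $370, MATE = $50.
print(n_classical(370, 1.96, 50))           # ~210.4, rounded up to 211 in the text
print(n_conservative(370, 1.96, 1.48, 50))  # ~648.0, the 648 reported in the text
```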
Evaluation of “Final” Sample
Results, n = 211
 Sample mean = $990
 Sample’s standard deviation = $360
 Standard error of the mean = $360 / √(211 - 1) = $24.84
 Conclusion?

30
Classical Evaluation of
Sample Results, n = 211
 There is a 95% probability that the true population mean lies within the interval of $990 ± (1.96 x $24.84), or between $941 and $1,039 (i.e., with 5% risk).
 Best estimate of total inventory = $990 x N, or $19,800,000. There is a 5% risk that the inventory is not between $18,826,200 and $20,773,800.
31
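A sketch of this evaluation step, following the text's convention of dividing the standard deviation by √(n - 1); the function and its name are illustrative:

```python
import math

def evaluate_mpu(sample_mean, sample_sd, n, N, u_r=1.96):
    """Point estimate and confidence interval for the population total under MPU."""
    se = sample_sd / math.sqrt(n - 1)              # text convention: sqrt(n - 1)
    half_width = u_r * se                          # achieved precision per unit
    estimate = sample_mean * N
    return estimate, (sample_mean - half_width) * N, (sample_mean + half_width) * N

# n = 211 case: mean $990, s = $360, N = 20,000 bins.
est, low, high = evaluate_mpu(990, 360, 211, 20_000)
print(round(est), round(low), round(high))  # -> 19,800,000 and roughly 18,826,000 to 20,774,000
```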
Evaluation of “Final” Sample
Results, n = 648
 Sample mean = $990
 Sample’s standard deviation = $360
 Standard error of the mean = $360 / √(648 - 1) = $14.15
 Conclusion?

32
Classical Evaluation of
Sample Results, n = 648
 There is a 95% probability that the true population mean lies within the interval of $990 ± (1.96 x $14.15), or between $962 and $1,018 (i.e., with 5% risk).
 Best estimate of total inventory = $990 x N, or $19,800,000. There is a 5% risk that the inventory is not between $19,240,000 and $20,360,000.
33
Conservative Observations
(n = 648)
 An upper possibility of $20,360,000 less the
M$ of $1,000,000 is $19,360,000.
 A lower possibility of $19,240,000 plus the
M$ of $1,000,000 is $20,240,000.
 Conclusion: The alpha and beta risks are tolerable, and the BV of $20,000,000 being between the $19,360,000 and $20,240,000 "extreme case" limits leads to no suggested AJE.

34
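A minimal sketch of that "extreme case" screen, assuming the decision rule described on the slide (the function is hypothetical):

```python
def book_value_acceptable(book_value, ci_low, ci_high, materiality):
    """Accept BV if it lies between (upper limit - M$) and (lower limit + M$)."""
    return (ci_high - materiality) <= book_value <= (ci_low + materiality)

# n = 648 case: interval $19,240,000 to $20,360,000, M$ = $1,000,000, BV = $20,000,000.
print(book_value_acceptable(20_000_000, 19_240_000, 20_360_000, 1_000_000))  # -> True, so no AJE
```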
AUDIT MEANING of the
CONFIDENCE INTERVALS
 With R = .95, there is a 95% probability that the [fixed] true universe mean, μ, is contained within the [random] confidence interval that the auditor constructed.
 The client’s assertion is within that range,
but no one knows the “real answer;”
evidence failed to refute the BV.
No adjustment would be requested!
35
Gathering sample evidence:
To cease or not to cease?

If the auditor's confidence interval failed to engulf the client's BV, perhaps n would be increased before requesting the client to recount the entire inventory. After all, there was a 5% chance that the audit sample was not representative!
36
FAQ?

What is the effect of just increasing the sample size, n, before taking more extreme measures?

Our lone random x̄ came from a "population of possible sample means"; the smaller its standard deviation, the better! So...
37
Increasing n may help
improve Achieved Precision!
An updated confidence interval
may “tighten precision,” if
THE NOW LARGER
SAMPLE’S MEAN &
STANDARD DEVIATION DO
NOT INCREASE -- IF THEY
DO, it’s almost Square 1.
38
FAQ?

If continued sampling supported believing that a material misstatement of the account balance was occurring, WHAT'S TO DO?

Sampling might be continued, or
account data reworked by the client, or
an AJE might be proposed, or
a "non-standard" Audit Report might be issued.
39
Summary

 Assessing CR or DR improperly may "trigger" improper sample sizes; thus, the auditor risks either an "alpha error" (CR assessed too high) or a "beta error" (CR assessed too low).
 The "materiality tolerance" becomes, in effect, the distance between two possible hypotheses (the client's & the auditor's).
 The auditor should sample until risks are low, e.g., 5% or less.
40
FAQ?
If the difference between the "best estimate" and either limit of the confidence interval exceeds M$, should the auditor consider more sampling to attempt to "tighten" precision? (Note the example data.)
Interesting! The “range of
acceptability” theory supports it!

41
• Presumably, if achieved precision
is “tighter” (on both tails) than the
desired precision, “beta risk
concerns” are satisfied.
• Otherwise, since the truth could
be at an extreme point/limit,
increasing n might be wise.
42
The client’s BV is viewed in contrast to
audited values, AV, generating d values.
43
Difference Sampling

 Estimation of a population of errors, d, is Difference Estimation. (Ratio Estimation is slightly different.)
The mean of the [unknown]
population of random errors
should be $0; otherwise, errors
just may not be random errors!
Is fraud a possibility?
44
Difference Estimation
(a variation of MPU)
 Given: Subsidiary ledger details,
differences between the BV per client for
each sample item, and an AV per auditor
for each corresponding item.
 Generated: A population of differences, di values, which logically would have many non-zero components, and its own mean, d̄, and standard deviation, sd.
45
Applying Difference Estimation

 Tie the subsidiary details to the G/L.
 Draw a pilot sample to estimate sd.
 Calculate the desired sample size, n, with calculations similar to variables estimation.
 Select the sample units, generating d̄ and sd.
 Calculate achieved precision, A', D = N x d̄, and the best estimate of the account balance: BV ± D.
46
Illustration

 N = 14,000; BV = $22,400,000
 CR = 50%; IR = 50%; “Alpha Risk” = 5%.
 Pilot sample standard deviation = $464
 M$ = 3.6% of BV = $800,000
 Auditor calculations indicate a desired n = 517, and a mean difference, d̄ = $61.34.
 Best estimate of D = 14,000 x $61.34 = $858,760
The auditor’s conclusion, comparing $858,760
vs. the $800,000, is a judgment call!
47
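A compact sketch of the difference estimation projection, using the illustration's figures (the sample list below is a stand-in whose mean difference equals $61.34):

```python
def difference_estimate(differences, N):
    """Project the mean per-item difference, d-bar, to the population: D = N x d-bar."""
    d_bar = sum(differences) / len(differences)
    return N * d_bar

# Stand-in sample of 517 differences averaging $61.34, with N = 14,000 items.
sample_differences = [61.34] * 517
print(round(difference_estimate(sample_differences, 14_000)))  # -> 858,760, vs. M$ of $800,000
```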
“Classical” Difference
Sampling Procedures
 Generate a list of all differences (including
$0 differences) from n random
samples.
 Calculate the d list’s mean, standard
deviation, and a standard error of
differences.
 Apply traditional calculations, with the
statistical notation properly altered; N is
relevant.
48
Account Balances =
A population of dollars, not items.
49
Probability-Proportional-to-
Size
(or Dollar Unit) Sampling
 Each DOLLAR of the population = a
sample unit, as sampling every nth
dollar, e.g., every 10th, would be a plan
with higher odds that the larger
“logical sampling units” (LSUs, or host
units) get investigated.
 Every LSU of 10% or more of the
account balance would be “snagged.”
50
PPS THEORY

 Any account balance is a population of dollars, not items; e.g., the earlier inventory of N = 20,000 LSUs (inventory stock numbers) is viewed as N = 20,000,000 US dollar bills, per se.
 Purported advantages include bypassing of pilot samples, "automatic stratification," and special applicability to auditing.
51
PPS THEORY

 Relevant applicability to auditing presumes that larger LSUs are more likely to have large errors than are smaller LSUs.
 Disadvantages: LSUs that are understated
have much lower odds of selection;
LSUs with $0 FYE balances are omitted.
Remember: Low BV items can also
have any kind of errors!
52
Conditions encouraging PPS

 The auditor expects an account to have few, but significant, overstatement errors.
 Analytical procedures, etc., lead the auditor to suspect material overstatements, e.g., experience supports miscodings of R&M purchase invoices (e.g., capitalizations).
PPS may be somewhat difficult to apply
in cases of severe understatements.
53
FAQ?
If each “dollar” has even odds of
selection, should not each LSU also
have equal odds of selection?

No! The logical unit's odds become proportional to its size (a PPS); also, any overstated LSU has higher odds of selection than any understated LSU.
54
FAQ?

If a single "dollar" selected is invalid in some way, what is the impact of a misstatement (error)?

The LSU ("host unit") becomes "in error." All dollars of the LSU are, to some extent, "erroneous," or so-called Tainted Dollars.
55
PPS Approach to
Sample Size
n = (RF x BV) / [TE - (AE x EF)]

where,
RF = Reliability factor ("z-score")
BV = Book value
TE = Tolerable error (dollars)
AE = Anticipated error (dollars)
EF = Expansion factor
56
Applying PPS

 RF = Reliability factor, corresponding to the auditor's tolerable beta risk (i.e., the risk of failing to reject false assertions), assuming zero errors *
 EF = Expansion factor *

* From the AICPA's Audit Sampling Guide, 1983.
57
Illustration: Have R&M
invoices been capitalized?
 N = $3,500,000
 LSU = 3,980 purchase invoices posted
 AE = f(IR, CR, judgment) = $60,000
 M% = 3.2% of N
 TE = M$ = $111,000
 Beta risk = 5%
 RF = 3; EF = 1.6 (See text)
n = (3 x $3,500,000) / [$111,000 - ($60,000 x 1.6)] = $10,500,000 / $15,000 = 700
58
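A short sketch of the PPS sample size and the resulting sampling interval, with this illustration's inputs (the function names are assumptions):

```python
def pps_sample_size(rf, book_value, tolerable_error, anticipated_error, expansion_factor):
    """PPS sample size: n = (RF x BV) / [TE - (AE x EF)]."""
    return (rf * book_value) / (tolerable_error - anticipated_error * expansion_factor)

def sampling_interval(book_value, n):
    """Systematic selection interval: every SI-th dollar is examined."""
    return book_value / n

n = pps_sample_size(3, 3_500_000, 111_000, 60_000, 1.6)
print(n, sampling_interval(3_500_000, n))  # -> 700.0 and a $5,000 interval
```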
The Sampling Interval, SI

 SI = the distance between two consecutive sample items (dollars) for use in systematic sampling = N / n
 SI = $3,500,000 / 700 = $5,000; thus, every 5,000th dollar will be sampled! Any LSU of at least $1 x 5,000 is material.
 LSUs ≥ $5,000 will be examined, i.e., any LSU that alone, if fictitious, would be material.

59
Projections from the
Test Exceptions

The theory: Project the errors discovered by calculating the "percentage miss" (tainting) for each error, and apply it to the Sampling Interval, unless the LSU's BV > SI. If BV > SI, include the full error.

Note: Zero errors support "No AJEs!"


60
PPS Exceptions (Exhibit 10.3)
SI = $5,000; n = 700; d = 5

Error #   B.Value (a)   Per Audit (b)   Error $ (c)   Tainting % (d)*   Projected Error (d) x SI
~~~~~~~~~ (errors 1 through 3 not reproduced here) ~~~~~~~~~
4         $1,712        $800            $417          0.340             $1,713
5         $3,360        $900            $2,360        0.720             $3,620

Total of the five errors discovered: $90,553
61
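A sketch of the projection rule for each exception, assuming simple (book value, audited value) inputs; the two exceptions shown are hypothetical, not Exhibit 10.3's rows:

```python
def projected_error(book_value, audited_value, sampling_interval):
    """Full error if BV >= SI (examined 100%); otherwise tainting % times SI."""
    error = book_value - audited_value
    if book_value >= sampling_interval:
        return error
    tainting = error / book_value      # the "percentage miss" for the host LSU
    return tainting * sampling_interval

SI = 5_000
print(projected_error(12_000, 9_000, SI))  # -> 3000, full error for a large LSU
print(projected_error(1_000, 800, SI))     # -> 1000.0, a 20% tainting projected to the $5,000 SI
```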
Auditor Conclusions, based on
five errors totalling $90,553
 A “tainting” percentage is inferred by, and
for, the “tainted” LSUs under $5,000.
 PE = Projected Error (best estimate):
Each discovered error is a percentage of the
LSU examined; such percentage is projected
to the $5,000 SI containing the error, and
such “projected errors” are totaled AND
ADDED to errors discovered in LSUs over
$5,000. Exhibit 10-3: $ 94,513.

62
Auditor Conclusions

 BP = Basic Precision = RF x SI = 3 x $5,000 = $15,000
 IA = Incremental Precision Allowance = $7,541 [See Text Exhibit 10.4]
 ASR = Allowance for Sampling Risk = BP + IA = $15,000 + $7,541 = $22,541
 UEL = Upper Error Limit = PE + ASR = $94,513 + $22,541 = $117,054
63
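A minimal sketch assembling the upper error limit from these pieces (the incremental allowance is taken as given from the text's Exhibit 10.4):

```python
def upper_error_limit(projected_errors_total, rf, sampling_interval, incremental_allowance):
    """UEL = PE + ASR, where ASR = basic precision (RF x SI) + incremental allowance."""
    basic_precision = rf * sampling_interval
    asr = basic_precision + incremental_allowance
    return projected_errors_total + asr

# Figures from the slides: PE = $94,513, RF = 3, SI = $5,000, IA = $7,541.
print(upper_error_limit(94_513, 3, 5_000, 7_541))  # -> 117054
```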
Auditor Conclusions

 The probability is 95% that the excess capitalization does not exceed $117,054.
 But comparing the UEL of $117,054 to the M$ of $111,000 leads to concluding that a material amount of R&M purchase invoices was capitalized into the Machinery & Equipment account.
Increasing n to tighten precision may be advisable.
64
FAQ?
Should an AJE reducing the M&Eq
account by the actual errors discovered,
$90,553 (M%=2.6%) or 5/700 (.71%
error rate), satisfy auditor judgment?
Likely; mainly because a general adjustment of a 4,000-item listing here (5 known errors) may be a "stretch." What caused these five (overstatement) errors is of great concern.
65
Interesting Queries

• Could this mean a $90,553 overstatement for every 5,000th dollar?
• How should the relationship of the $90,553 to the total dollars in the sample be viewed?
• What is the specific relevance of 695 samples (and their dollar total) having 0 ERRORS?
• What would be the result of using this same data, but in classical variables estimation?

66
FAQ?

Can PPS be applied to tests of transactions and CR analysis?

Yes! Control Risk assessments & tests build confidence as to what should be in listings, not just what is there at FYE!

67
Judgment: n, Control Risk, &
Reliability
Sample size needed, with an ordinal indicator of the extent of testing:

Reliability Level (R)
Desired        Low CR     Med. CR    High CR
High           8 (Sm)     9 (?)      10 (Lg)
Med.           5 (Sm)     6 (Md)     7 (Lg)
Low            2 (Sm)     3 (?)      4 (Lg)
68
Interesting Observation

• In PPS sampling of Accounts Payable, a scan of physical files might locate "fat folders" (lots of dealings).
• Creditors having zero balances at FYE have zero chance of selection from subsidiary ledger printouts!
Some zero-balance confirmations should be in samples!
69
Observations

 Illustrations often use large error rates, etc., for understanding; in practice, data processing error rates are often so small that Poisson distribution tables are needed.
 Caution: Be careful not to suggest AJEs for large dollars when errors are from only a very few items that need specific AJEs.

70
FAQ?

If a projected AJE for overstating Accounts Receivable is proposed, whose accounts get the subsidiary ledger's credit entry allocation?

71
Key Terms
 Achieved precision  Precision
 Alpha risk concepts  PPS
 Anticipated error  Projected error
 Beta risk concepts  Range of acceptability
 Confidence level  Reliability
 Desired precision  Sampling interval
 Difference estimation  Sampling risk
 LSU  Standard deviation
 MPU  Tolerable error
 Pilot sample  Variables estimation
72
End of Chapter 10

73
