
THE ACCOUNTING REVIEW
American Accounting Association
Vol. 93, No. 4, July 2018, pp. 177–202
DOI: 10.2308/accr-51926

When Do Auditors Use Specialists’ Work to Improve Problem Representations of and Judgments about Complex Estimates?
Emily E. Griffith
University of Wisconsin–Madison
ABSTRACT: Auditors are more likely to identify misstatements in complex estimates if they recognize problematic
patterns among an estimate’s underlying assumptions. Rich problem representations aid pattern recognition, but
auditors likely have difficulty developing them given auditors’ limited domain-specific expertise in this area. In two
experiments, I predict and find that a relational cue in a specialist’s work highlighting aggressive assumptions
improves auditors’ problem representations and subsequent judgments about estimates. However, this improvement
only occurs when a situational factor (e.g., risk) increases auditors’ epistemic motivation to incorporate the cue into
their problem representations. These results suggest that auditors do not always respond to cues in specialists’ work.
More generally, this study highlights the role of situational factors in increasing auditors’ epistemic motivation to
develop rich problem representations, which contribute to high-quality audit judgments in this and other domains
where pattern recognition is important.
Keywords: accounting estimates; audit quality; engagement risk; fair value; pattern recognition; source
credibility; specialist.

I. INTRODUCTION

Auditors have difficulty identifying misstatements signified by problematic patterns among assumptions underlying
complex estimates when those assumptions appear reasonable individually (Public Company Accounting Oversight
Board [PCAOB] 2011; Cannon and Bedard 2017; Griffith, Hammersley, and Kadous 2015a).1 This can reduce the
quality of audits of estimates (PCAOB 2010b; Griffith et al. 2015a; Glover, Taylor, and Wu 2017a). Auditors’ difficulty
identifying these types of misstatements suggests that inadequate problem representations of estimates are one cause. Problem
representations aid pattern recognition, yet auditors report that they and their teams often lack the domain-specific expertise that
helps auditors form rich problem representations of estimates (Boritz, Kochetova-Kozloski, Robinson, and Wong 2017;
Glover, Taylor, and Wu 2017b; Bedard and Biggs 1991a, 1991b; Hammersley 2006). In this study, I examine the joint effect of

I am grateful for the contributions of my dissertation committee: Michael Bamber, Tina Carpenter, and especially Jackie Hammersley (chair) and Kathryn
Kadous. I thank Ashley Austin, Mike Ricci, Amy Tegeler, and Michael Thorsheim for their research assistance, and I also thank the auditors and their
firms who gave their time and effort to participate in this study. I appreciate helpful comments from two anonymous reviewers, Elizabeth Altiero, Tim
Bauer, Christine Earley, Jeremy Griffin, Karla Johnstone, Tracie Majors, Brian Mayhew, Christine Nolder, Mark Peecher (editor), Donald Young, doctoral
seminar participants at The University of Georgia, and workshop and conference participants at Arizona State University, The University of Georgia,
University of Illinois, University of Massachusetts Amherst, University of Wisconsin–Madison, the New England Behavioral Accounting Research Series
(hosted by Bentley University), the 2015 AAA ABO Research Conference, and the 2016 AAA Auditing Section Midyear Meeting. I am grateful for
research funding provided by the AICPA Accounting Doctoral Scholars Program, the Deloitte Foundation Doctoral Fellowship, and the AAA/Grant
Thornton Doctoral Dissertation Award.
This paper is based on my dissertation, which was completed at The University of Georgia.
Editor’s note: Accepted by Mark E. Peecher, under the Senior Editorship of Mark L. DeFond.
Submitted: June 2015
Accepted: September 2017
Published Online: October 2017

1 For example, a client’s estimate might rely upon assumptions that future revenue will increase and future expenses will decrease. Each assumption (i.e.,
increasing revenue, decreasing expenses) could be individually reasonable and supported by management’s plans to (1) open new stores to increase
revenue, and (2) cut back on sales personnel to decrease expenses. Yet, the assumptions are inconsistent with each other because the planned increase in
stores suggests that management will have to increase, not decrease, sales personnel to staff incremental stores. The assumptions in combination form a
problematic pattern that suggests that the estimate is based on unreasonable assumptions, even if each assumption appears reasonable on its own.

a relational cue from a specialist and situational factors indicating higher risk of misstatement on non-expert auditors’ problem
representations and evaluations of estimates.2
Auditors’ difficulty identifying patterns among assumptions suggests that their problem representations lack adequate
relational structure—the understanding of how assumptions relate to each other and to the estimate’s value (e.g., Rouse and Morris
1986). This relational structure helps one predict how changing one assumption will affect other assumptions and the estimate’s
final value. Specialists’ domain-specific expertise includes deep understanding of relationships among inputs to estimates, and
specialists often include details in their work that reveal relational structure cues by highlighting aggressive assumptions and their
impact on an estimate (Griffith 2016). These cues, if used by auditors, could improve auditors’ problem representations.
I expect that auditors will be more likely to benefit from a specialist-provided relational cue when one or more risk
indicators—situational factors indicating heightened risk of misstatement without identifying a specific problem—increase their
epistemic motivation to scrutinize a client’s estimate (e.g., Kruglanski 1990). Auditors sometimes ignore the additional details
in specialists’ work or treat contrary evidence in specialists’ work as irrelevant to overall audit conclusions (Griffith et al.
2015a; Glover et al. 2017b). These behaviors suggest that unless a situational factor prompts additional scrutiny, auditors are
unlikely to dig into the details of specialists’ work (e.g., Joe, Wu, and Zimmerman 2017b). Consistent with the Elaboration
Likelihood Model (ELM) (Petty, Wegener, and Fabrigar 1997), which posits that one’s scrutiny of a message increases with
the personal risks and benefits of accurate assessment, I expect an indicator of higher risk to increase auditors’ scrutiny of
specialists’ analysis of the estimate. Many factors in the audit environment can suggest a higher risk of misstatement in general
without diagnosing a specific source of misstatement in an account. Because there is no objectively correct value for an estimate
until it is realized, which often occurs long after the audit is finished (Griffith et al. 2015a), auditors must identify specific
problems within estimates to justify proposed adjustments and persuade management that the current estimate is unreasonable.
In this study, I examine two risk indicators: client source credibility and engagement risk. Source credibility is the
perceived competence and objectivity of a source (Pornpitakpan 2004), so low client source credibility implies a higher risk of
errors or bias in an estimate. Engagement risk is the risk of potential harm to the audit firm as a result of an audit engagement
(Hackenbrack and Nelson 1996). It includes audit risk and auditor and client business risks (Colbert, Luehlfing, and Alderman
1996). While client source credibility and engagement risk can increase auditors’ ex ante perceived risk of misstatement,
neither risk indicator diagnoses specific problems within an estimate that auditors would need to identify in order to correct a
misstatement. When these indicators suggest higher risk, I expect them to prompt auditors to use a relational cue to improve their problem representations and judgments about estimates.
I test this expectation in two 2 × 2 between-participants experiments in which auditors evaluate a biased estimate
underlying a client’s goodwill impairment analysis. The client concludes that the estimated fair value exceeds book value and,
thus, goodwill is not impaired. The assumptions underlying the estimate appear reasonable individually, but jointly, they form a
pattern of management bias that implies the estimate is materially overstated.
In both experiments, I manipulate the presence or absence of a relational cue by giving participants a specialist’s memo
with or without commentary noting that two assumptions fall at the aggressive ends of the ranges deemed reasonable by the
specialist. Regardless of the presence or absence of the relational cue, the specialist’s memo documents that the two
assumptions are aggressive, but the specialist concludes that the assumptions are (individually) reasonable. The relational cue
does not identify a misstatement within the estimate or conclude that an individual assumption is unreasonable. It only specifies
a relationship between two of the assumptions suggestive of a pattern of bias. I test whether auditors develop richer problem
representations and better recognize the misstatement implied by the pattern of bias when they receive a relational cue and a
different risk indicator in each experiment. Neither risk indicator pinpoints the source of misstatement. In Experiment 1, I
measure auditors’ views of the client’s source credibility to examine the association between this risk indicator and auditors’
use of the relational cue. In Experiment 2, I manipulate engagement risk as a risk indicator to provide causal evidence
consistent with the results of Experiment 1.
In Experiment 1, auditors who rate client source credibility lower have richer problem representations, measured by the
extent of relationships and inferences in free recalls of case information, and judge the estimate as less reasonable only when
they receive a relational cue. Results of a structural equations model are consistent with the predicted process, in which the joint
effect of a relational cue and lower source credibility flows from problem representations to identification of potential issues in
the estimate to reasonableness judgments to adjustment recommendations. Experiment 2 finds similar results using engagement
risk as a manipulated risk indicator. This experiment demonstrates that the beneficial effect of the relational cue on problem
representations holds with a risk indicator less likely than source credibility to suggest management bias in an estimate. These
theory-consistent results suggest that auditors’ judgments and decisions about estimates benefit from a relational cue, but only
when auditors incorporate the cue into their problem representations, which is more likely in the presence of a risk indicator.

2 I use the term “specialist” to refer to a valuation specialist employed by the audit firm who assists the audit team in an audit of a complex estimate.


This study makes contributions of interest to researchers, regulators, and practitioners. It reveals conditions under which
auditors effectively use information in their specialists’ work, an area of concern to regulators and auditors (PCAOB 2015;
Griffith et al. 2015a). The specialist’s cue can help auditors relate the specialist’s results to the results of testing other
assumptions and to the estimate overall. However, auditors appear to disregard the specialist’s cue when situational factors
indicate lower risk. This result suggests that auditors do not always effectively use information from specialists, despite
specialists’ domain-specific expertise that enables them to identify potential issues that auditors would otherwise miss.
Auditors’ use of the cue only when an indicator suggests higher risk shows that auditors’ failure to use their specialists’ work
can be attributed, in part, to situational factors, in addition to auditors’ lack of valuation knowledge, the type of domain-specific
expertise blamed for many of auditors’ problems auditing estimates (Griffith et al. 2015a). This result demonstrates the
importance of considering how situational factors affect auditors’ epistemic motivation when trying to understand how auditors
use specialists’ work and, more broadly, how auditors incorporate available cues into their problem representations. The role of
epistemic motivation in auditors’ use of the cue informs the design of interventions aimed at improving auditors’ problem
representations temporarily (e.g., decision aids) or permanently (e.g., training). To be effective, these interventions likely
require consideration of auditors’ epistemic motivation, in addition to consideration of the deficiencies in their problem
representations. Such interventions might also be useful in other domains where pattern recognition is important, such as fraud
detection and analytical procedures.
This study also shows that richer problem representations of complex estimates help auditors recognize when assumptions
collectively suggest an issue, but do not appear problematic individually. Recent research examines how to promote evidence
integration to improve audits of estimates (Wolfe, Christensen, and Vandervelde 2017; Griffith, Hammersley, Kadous, and
Young 2015b; Kadous and Zhou 2016; Joe, Vandervelde, and Wu 2017a; Joe et al. 2017b). Integrative thinking also helps
auditors identify fraud patterns that are not apparent when cues are considered individually (Hoffman and Zimbelman 2009;
Simon 2012). The richer problem representations caused by attending to a relational cue help auditors integrate evidence.
Last, this study shows that careful use of specialists’ work can improve non-expert auditors’ problem representations.
Industry-specialist auditors form richer problem representations that aid pattern recognition, but auditors without domain-
specific expertise have trouble recognizing problematic patterns among evidence (Hammersley 2006; Hammersley, Johnstone,
and Kadous 2011). Understanding how relational cues in specialists’ work aid non-expert auditors’ problem representations is
important because auditors are unlikely to propose adjustments or persuade clients to adjust estimates without identifying
specific problems (Cannon and Bedard 2017).
The remainder of this study proceeds as follows. Section II develops the hypotheses. Sections III and IV describe each
experiment used to test the hypotheses and present results. Section V discusses the implications and future research directions
suggested by this study.

II. HYPOTHESES DEVELOPMENT

Background on Auditing Complex Estimates


Due to estimates’ inherent subjectivity and uncertainty, auditors evaluate their overall reasonableness, rather than verify
their accuracy, by evaluating an estimate’s model, inputs, and assumptions (PCAOB 2009; Griffith et al. 2015a). Assumptions
about future performance, discount rates, and industry conditions are subjective and difficult for auditors to evaluate and,
consequently, could be biased by management (Martin, Rich, and Wilks 2006; Lundholm 1999). Some misstatements in
estimates only become evident when auditors consider assumptions in combination. Individually reasonable assumptions might
contradict each other (e.g., increasing sales, but decreasing cost of sales) or form a pattern that suggests bias (e.g., several
assumptions at the estimate-increasing end of the range) (PCAOB 2011, ¶51–55; Griffith et al. 2015a, 2015b).3
The prevalence of estimates has led accounting firms to employ valuation specialists to assist auditors in assessing the
reasonableness of some elements in estimates (Martin et al. 2006; Smith-Lacroix, Durocher, and Gendron 2012). Auditors
report that these specialists have valuation knowledge that auditors at all ranks, but especially those with less experience, often
lack (Griffith et al. 2015a; Griffith 2016). Valuation knowledge is an important type of domain-specific expertise needed to
audit estimates (Griffith 2016).4 Even audit partners who rate their own valuation knowledge highly rely on specialists to help

3 Auditing standards define a misstatement in an estimate as the difference between a value that falls outside a reasonable range and the nearest edge of
the reasonable range (PCAOB 2010a). Patterns among assumptions that fall within reasonable ranges can also indicate misstatements as described
above (PCAOB 2011). This study examines the latter case.
4 Consistent with Hammersley (2006), I define domain-specific expertise as the knowledge needed to develop a rich problem representation. In the
estimates setting, valuation knowledge is one type of domain-specific expertise needed to assess complex estimates involving assumption-based
valuation models (Griffith et al. 2015a). Valuation models underlying estimates can use income, market, or asset approaches. While this study uses an
income approach (i.e., discounted cash flow) model, auditors need valuation knowledge to understand and assess any of these models.


them audit the most challenging estimates (Glover et al. 2017b). Specialists typically evaluate the estimation model and
assumptions about discount rates, market benchmarks, and general industry or economic trends, all of which have significant
subjectivity (Griffith 2016; Glover et al. 2017a). In contrast, auditors typically evaluate assumptions about clients’ financial
measures, such as future revenues and expenses (Griffith 2016). Specialists generally do not conclude on an estimate overall
because they do not have access to all of the client’s information (Boritz et al. 2017; Griffith 2016). Rather, specialists conclude
on each assumption that they examined, often by stating whether the assumption falls within a reasonable range (Griffith 2016).
Auditors are ultimately responsible for concluding whether an estimate is materially misstated (PCAOB 2003, ¶12–14), even
though they perform only a subset of the procedures to test the estimate. This division of labor exposes auditors to regulators’
criticisms that they over-rely on specialists’ work without fully understanding how the details of that work support the overall
conclusion (Glover et al. 2017a).5 The division of labor also makes auditors, more than specialists, responsible for recognizing
problematic patterns among the assumptions—whether they were tested by the specialist or the audit team.

The Role of Problem Representations in Identifying Misstatements in Complex Estimates


Auditors who recognize patterns among assumptions are more apt to identify misstatements in estimates. Pattern
recognition is more likely when an auditor has a rich problem representation into which she can “fit” available cues to test if
they form a pattern (Bedard and Biggs 1991b; Bierstaker, Bedard, and Biggs 1999; Hammersley 2006). Problem
representations combine existing knowledge with incoming stimuli (Chi, Glaser, and Rees 1982; Greeno 1989). Problem
representations’ relational structures (i.e., understanding of how elements in the problem representation fit together) allow
people to mentally integrate information to make inferences about it (Chi et al. 1982; Rouse and Morris 1986). Integrating and
making inferences from the evidence related to an estimate’s assumptions involves knowledge of the client, industry, valuation
models, and their interactions. Industry-specialist auditors have richer problem representations that aid pattern recognition
(Bedard and Biggs 1991a; Hammersley 2006). Auditors’ lower valuation knowledge relative to specialists (Martin et al. 2006;
Griffith et al. 2015a; Boritz et al. 2017; Glover et al. 2017b) implies that auditors’ problem representations and pattern
recognition could benefit from incorporating specialists’ cues.
Interventions can help improve non-experts’ problem representations by providing task-relevant information (Rouse and
Morris 1986) or suggesting what to consider next (Bierstaker et al. 1999). Prompts to think about relationships improve
recognition of problematic patterns, consistent with improved problem representations (Hoffman and Zimbelman 2009;
Schultz, Bierstaker, and O’Donnell 2010; Brewster 2011; Simon 2012). Non-expert auditors struggle to identify problematic
patterns among assumptions underlying estimates (PCAOB 2010b, 2011; Griffith et al. 2015a, 2015b), suggesting that their
problem representations lack relational structure. Cues that inform relational structure can help auditors form richer problem
representations of estimates.
Specialists’ work often contains relational cues in the form of detailed commentary that can help auditors understand how
assumptions tested by the specialist affect other assumptions and the estimate overall (Griffith 2016). These cues illustrate
relationships or patterns. For example, a specialist’s work might emphasize how a change in an assumption would impact an
estimate or note that multiple assumptions are aggressive. Such relational cues can help auditors develop richer problem
representations, which can help auditors integrate evidence patterns by providing a mental template for how the assumptions
collectively impact the estimate.

The Role of Situational Factors in Auditors’ Problem Representation Development


Relational cues will not lead to richer problem representations if auditors overlook them. Consistent with a check-the-box
approach (e.g., Boritz et al. 2017; Griffith et al. 2015a, 2015b), sometimes auditors scan specialists’ memos for conclusions
without carefully considering supporting details (Griffith 2016). Auditors, at times, ignore the additional details in specialists’
work, treat contrary evidence in specialists’ work as irrelevant to overall audit conclusions, and accept specialists’ conclusions
at face value (Griffith et al. 2015a; Glover et al. 2017b; Boritz et al. 2017). These behaviors are consistent with low epistemic
motivation (Kruglanski 1990). The Elaboration Likelihood Model (ELM) posits that people must be sufficiently motivated to
elaborate on message-related evidence to accurately assess a message (Petty et al. 1997; Crano and Prislin 2006; Bohner and
Dickel 2011).6 Elaboration involves incorporating external cues with existing knowledge (Crano and Prislin 2006; Petty and
Cacioppo 1986; Petty et al. 1997), which is how a problem representation forms. The aforementioned failures by auditors to

5 Specialists likely adopt a piecemeal attitude and role in the audit that allows auditors to ignore certain details of specialists’ work because specialists’
involvement in the review phase of the audit is limited (Boritz et al. 2017) and/or specialists’ main revenue-generating activities within the firm pertain
to preparing estimates for nonaudit clients, rather than assisting on audit engagements (Barr-Pulliam, Joe, Mason, and Sanderson 2017).
6 Consistent with this idea, Rich (2004) relies on the ELM to predict and find that audit reviewers elaborate more on preparers’ work when reviewers
expect more preparer errors and feel more accountable to financial statement users.


appropriately use their specialists’ work suggest that auditors often lack motivation to elaborate on specialists’ work. I expect
that auditors will incorporate a relational cue into their problem representations to the degree that situational factors motivate
them to elaborate.7
I expect situational factors that indicate a higher risk of misstatement to increase auditors’ motivation to elaborate on the
evidence related to estimates, including specialists’ relational cues, because motivation to elaborate is a function of the personal
risks and benefits of accurately assessing a message (Petty et al. 1997; Petty and Cacioppo 1986).8 Risk indicators, such as
client source credibility or engagement risk, can elevate ex ante beliefs about the risk of misstatement without identifying
specific issues, similar to general fraud red flags that do not suggest specific fraud hypotheses (Hammersley 2011; Hammersley
et al. 2011). Source credibility refers to one’s belief that a source provided accurate and objective information (Pornpitakpan
2004). Low client source credibility suggests to auditors that a material misstatement more likely exists (Hirst 1994; Anderson,
Kadous, and Koonce 2004), but it does not indicate which of an estimate’s assumptions, if any, are aggressive. Engagement
risk refers to the risk that an audit results in adverse consequences to the firm (Hackenbrack and Nelson 1996); it includes audit
risk, client business risk, and auditor business risk (Colbert et al. 1996). While high engagement risk can increase the perceived
risk of misstatement, it does not indicate a specific problem or misstatement, or even necessarily suggest low client accuracy or
objectivity.
Auditors’ problem representations and subsequent judgments are more likely to benefit from a relational cue if auditors are
motivated to elaborate and thereby incorporate the cue into their problem representations. Based on the ELM, I expect an
indicator of higher risk to increase auditors’ epistemic motivation to elaborate. However, recall that auditors seem to lack the
valuation knowledge they need to use specialists’ work effectively (Griffith et al. 2015a; Griffith 2016; Boritz et al. 2017;
Glover et al. 2017b). Auditors’ lack of valuation knowledge implies that even if auditors are motivated to elaborate on
specialists’ work, these efforts are less likely to result in successful elaboration (i.e., richer problem representations that aid
pattern recognition) if auditors do not receive some “help,” such as a relational cue to direct problem representation
development. In the absence of a cue, high motivation to elaborate is less likely to result in improved problem representations.
In settings that require pattern recognition, even when auditors attempt to increase their effort in response to signals warranting
an increase, they do not do so effectively absent relatively high task-specific experience (Hammersley et al. 2011; Joe et al.
2017a).9 On the other hand, if motivation is not needed to prompt use of the relational cue, then the cue alone would improve
problem representations regardless of the presence of an indicator of higher risk. However, variation in auditors’ use of
specialists’ work (PCAOB 2015) implies an opportunity for situational factors that affect epistemic motivation to improve use
of relational cues. Figure 1 illustrates the interaction of a relational cue and a risk indicator predicted below and in H2.
H1: Auditors who receive (versus do not receive) a relational cue will develop richer problem representations when a
situational factor indicates higher risk, but the effect of a relational cue will be weaker when a situational factor
indicates lower risk.
I expect the richer problem representations resulting from the interaction of a relational cue and a risk indicator to improve
auditors’ judgments and decisions about estimates by helping them recognize potential issues in an estimate. Auditors struggle
to recognize when individually biased assumptions cumulatively have a material impact on an estimate (PCAOB 2011; Griffith
et al. 2015a). Individual pieces of evidence might not suggest problems on their own and only suggest a misstatement when
considered together (Brown and Solomon 1990, 1991; Bedard and Biggs 1991b; Hammersley 2006). Richer problem
representations should help auditors consider how different pieces of evidence relate to one another and impact an estimate.
Identification of valid issues contributes to improved auditor judgments and actions (Hoffman and Zimbelman 2009;
Hammersley 2011; Hammersley et al. 2011; Griffith et al. 2015b). I expect the interaction of a relational cue and a risk indicator
to improve auditors’ judgments about an estimate as a result of identifying more valid potential issues in the estimate.
H2: Auditors who receive (versus do not receive) a relational cue will judge a biased estimate as less reasonable when a
situational factor indicates higher risk, but the effect of a relational cue will be weaker when a situational factor
indicates lower risk.

7 Existing knowledge and beliefs also likely influence an auditor’s motivational response to situational factors. Random assignment of auditors to the cue
conditions in Experiment 1 and to all experimental conditions in Experiment 2 controls for these factors, which fall outside the scope of this study.
8 To the extent that risk indicators represent conditions calling for increased skepticism (e.g., Nelson 2009), increased elaboration might be viewed as
increased skepticism.
9 While most auditors recognize heightened risk in Hammersley et al. (2011), only those with greater fraud experience effectively respond to that risk by
choosing procedures that would uncover the fraud. In my study, even if auditors recognize the higher risk signaled by the indicator, recognition of risk
alone is not likely to be sufficient for developing richer problem representations and identifying the pattern of bias suggesting misstatement.


FIGURE 1
Form of Hypothesized and Observed Interactions

III. EXPERIMENT 1
In Experiment 1, I manipulate the presence or absence of a relational cue and I measure auditors’ views of source
credibility as a risk indicator.10 A total of 105 experienced senior auditors from three Big 4 firms participated while attending
firm-sponsored training.11,12 In practice, senior auditors often evaluate assumptions related to estimates (Griffith et al. 2015a).
On average, participants in Experiment 1 have 3.6 years of experience and have used discounted cash flow models on 1.7
audits. Table 1 reports additional demographic data describing participants in both experiments. Overall, participants appear to
have sufficient experience with fair values and specialists to meaningfully complete the experiment, and have backgrounds
similar to participants in other studies examining auditors’ evaluation of estimates (e.g., Griffith et al. 2015b; Joe et al. 2017a,
2017b).

Task
Participants evaluated audit evidence related to a client’s annual goodwill impairment test. The client uses a discounted
cash flow model to estimate the fair value that is compared to book value in the impairment test. The model contains five key
assumptions that are reasonable individually, but collectively form a pattern of bias. Participants received a summary of the

10 I also manipulated the initial preparer of the client’s estimate as in-house or third party, which is marginally correlated with source credibility ratings (r = 0.163, two-tailed p = 0.10) and is not correlated with source credibility condition (discussed below in the “Independent Variables” subsection) (r = 0.135, two-tailed p = 0.17). These results do not allow confidence that preparer type effectively manipulates the theoretical source credibility construct, so I collapse results across preparer type in the analyses.
11 The number of participants included in analyses of each dependent variable ranges from 103 to 104 due to missing responses to individual dependent
measures.
12 Both experiments received Institutional Review Board (IRB) approval from the universities where they occurred.


TABLE 1
Participant Demographics: Education and Experience (a)
Cell entries are Mean (Median) [Standard Deviation]; Experiment 1 n = 105, Experiment 2 n = 165.

General Education and Experience
Experience (years): Experiment 1: 3.61 (3.00) [1.248]; Experiment 2: 3.61 (3.67) [1.251]
Proportion with CPA license or equivalent: Experiment 1: 83.8% (1.00) [0.370]; Experiment 2: 71.3% (1.00) [0.454]
Proportion with Master's degree: Experiment 1: *; Experiment 2: 51.6% (1.00) [0.501]

Task-Specific Knowledge and Experience
Proportion that understand directional impact of change in discount rate on a fair value (b): Experiment 1: 81.3% (1.00) [0.392]; Experiment 2: 65.2% (1.00) [0.478]
Number of audits involving discounted cash flow models: Experiment 1: 1.67 (1.00) [1.780]; Experiment 2: 2.02 (1.00) [4.554]
Number of audits involving a firm valuation specialist: Experiment 1: 1.91 (2.00) [1.942]; Experiment 2: 2.83 (2.00) [3.370]
Number of audits of goodwill impairment analysis: Experiment 1: 1.04 (1.00) [1.365]; Experiment 2: 0.98 (0.00) [3.717]
Number of audits where client used a third-party preparer: Experiment 1: 0.823 (0.00) [1.458]; Experiment 2: *
Number of audits in which client was close to violating debt covenants: Experiment 1: *; Experiment 2: 0.89 (0.00) [2.788]

* Indicates the demographic measure was not collected for this group.
a No demographic measures vary significantly across conditions, so I do not report cell means.
b After completing all dependent measures, participants answered the multiple choice question "How would an increase in the discount rate affect the reporting unit's fair value?" Response options included: increase the fair value, decrease the fair value, no effect on the fair value, and I do not know.

three assumptions tested by the audit team and the two assumptions tested by the specialist. Participants reviewed the work
performed and made a preliminary conclusion about the estimate.
The case, adapted from Griffith et al. (2015b), includes client background information, the client’s Step 1 goodwill
impairment test and discounted cash flow model supporting the fair value used in the test, a planning memo that identifies the
five key assumptions and who tested each one (audit team or specialist), and the work performed by the audit team and the
specialist. The client’s Step 1 goodwill impairment test shows a fair value of $670 million and book value of $590 million; fair
value exceeds book value, so goodwill is not impaired.13 The audit team’s work contains the results of testing three
assumptions: projected revenue, operating expenses, and capital expenditures. All three assumptions fall within ranges
considered reasonable by the audit team, but projected revenue and operating expenses fall at the aggressive (i.e., estimate-
increasing) ends of the ranges. The specialist’s work contains the results of testing two assumptions: discount rate and long-
term growth rate. Both assumptions fall within ranges considered reasonable by the specialist, but at the aggressive ends. In
total, four out of five assumptions used by the client are aggressive. This pattern strongly suggests misstatement in the estimate

13 While the case states that the goodwill balance is quantitatively and qualitatively significant to the client, it does not explicitly label the estimate as a significant risk or assign a preliminary inherent risk classification.


TABLE 2
Quantitative Impact of Adjusting Assumptions
(Reduction in fair value indicated by sensitivity analysis)

Assumption                        Reduction in Fair Value    Tested by
Projected revenue                 $39.1 million              Audit team
Projected operating expenses      $36.4 million              Audit team
Projected capital expenditures    $8.9 million               Audit team
Discount rate                     $10.7 million              Specialist
Long-term growth rate             $6.7 million               Specialist

The client's fair value of $670 million exceeds book value by $80 million. Thus, a reduction of the fair value of $80 million or more will cause the client to fail Step 1 of the goodwill impairment analysis, which leads to an income-reducing goodwill impairment charge.

because a higher estimate increases the client’s ability to pass Step 1 of the annual goodwill impairment test and avoid an
impairment charge.14 I hold this pattern of evidence constant across conditions in order to examine whether the combination of
a relational cue and a risk indicator causes auditors to interpret the same evidence differently.
The work performed includes sensitivity analyses showing the impact of adjusting each assumption to a less aggressive position. Table 2 shows that adjusting any assumption individually does not reduce the estimate by enough to change the outcome of the Step 1 test. However, if both of the aggressive assumptions tested by the audit team and either of the aggressive assumptions tested by the specialist are adjusted, the cumulative reduction lowers the fair value below book value, causing the client to fail the Step 1 test.
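The cushion arithmetic in Table 2 can be checked with a short script. This is a sketch that simply restates the table's figures; the dictionary keys are labels of my own choosing.

```python
# Sensitivity-analysis reductions in fair value (in $ millions), from Table 2.
reductions = {
    "projected revenue": 39.1,        # audit team
    "operating expenses": 36.4,       # audit team
    "capital expenditures": 8.9,      # audit team
    "discount rate": 10.7,            # specialist
    "long-term growth rate": 6.7,     # specialist
}

CUSHION = 670 - 590  # fair value minus book value = $80 million

def fails_step1(adjusted):
    """Client fails Step 1 if combined reductions wipe out the cushion."""
    return sum(reductions[a] for a in adjusted) >= CUSHION

# No single adjustment triggers impairment...
assert not any(fails_step1([a]) for a in reductions)
# ...but adjusting both aggressive audit-team assumptions plus either
# specialist assumption does (e.g., 39.1 + 36.4 + 10.7 = 86.2 >= 80).
assert fails_step1(["projected revenue", "operating expenses", "discount rate"])
assert fails_step1(["projected revenue", "operating expenses", "long-term growth rate"])
```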
After reading the case, participants assessed the likelihood that the estimate is fairly stated, decided whether they would
recommend to their manager that the client adjust the estimate, and identified potential issues in the estimate. Next, they put
away the case and completed a surprise free recall of the information that was important in their decisions about the estimate,
additional questions about the case, and demographic questions.

Independent Variables
Relational Cue
I manipulated the presence or absence of a relational cue in the specialist’s work. In both the relational cue and no
relational cue conditions, the specialist’s work documents testing of the discount rate followed by the conclusion, ‘‘Based on
the procedures performed, we conclude that [client name]’s discount rate appears reasonable,’’ and then documents testing of
the long-term growth rate followed by a similar conclusion about the long-term growth rate. In the relational cue condition, the
specialist includes the following statement after the second conclusion: ‘‘We note that the discount rate and long-term growth
rate used by [client name] both fall at the aggressive (i.e., fair value-increasing) ends of our reasonable ranges’’ (emphasis in the
original). The specialist’s work in the no relational cue condition excludes this statement. However, the case contains the
information necessary to convey to participants that the discount rate and long-term growth rate are aggressive without this
statement.15 Appendix A illustrates the placement and emphasis of the relational cue within the specialist’s work.

Client Source Credibility


I measured auditors’ views of client source credibility. Source credibility is the degree to which one believes a source
provided accurate and unbiased information. It increases with expertise, objectivity, and attractiveness (Pornpitakpan 2004;
Birnbaum and Stegner 1979). In auditing, source credibility is affected by clients’ integrity relative to other clients (Peecher

14 An audit partner and a senior manager from different firms, both with extensive experience auditing goodwill impairments where specialists were involved, reviewed the case and identified this pattern as indicative of misstatement in the estimate due to management bias. I use these auditors' views as a benchmark because the audit team, specifically, the engagement partner, is ultimately responsible for concluding on the overall estimate, and these auditors have experience making such conclusions.
15 In both relational cue conditions, the specialist's work clearly shows that the client's discount rate and long-term growth rate fall on the aggressive side of the midpoints of the specialist's reasonable ranges. The specialist's work also includes sensitivity analyses that clearly show that moving the client's discount rate and long-term growth rate toward the midpoints of the ranges would reduce the client's fair value.


TABLE 3
Ratings of Source Credibility Dimensions in Experiment 1 (a)

Panel A: Descriptive Statistics for Ratings of Source Credibility Dimensions: Mean (SE)
Lower credibility (n = 53): Expertise 5.40 (0.209); Objectivity 3.26 (0.198); Sophistication 4.67 (0.207)
Higher credibility (n = 51): Expertise 7.61 (0.128); Objectivity 6.34 (0.243); Sophistication 6.47 (0.138)

Panel B: t-tests of Differences in Ratings of Source Credibility Dimensions
Expertise: t(102) = 8.92, two-tailed p < 0.001
Objectivity: t(102) = 9.84, two-tailed p < 0.001
Sophistication: t(102) = 7.20, two-tailed p < 0.001

a Expertise, Objectivity, and Sophistication are the ratings from participants' responses on three 11-point Likert scales that ask about the expertise and objectivity of the preparer of the estimate and the sophistication of the client. Each scale is anchored by 0, very low, and 10, very high.

1996; Goodwin 1999); clients’ incentives to manage earnings (Anderson et al. 2004); source competence (Bamber 1983); and
whether the source is a member of the audit firm, client management, or an external organization (Hirst 1994; Joyce and Biddle
1981; Goodwin and Trotman 1996). Auditors rely more on information from more credible sources (Hirst 1994; Anderson et
al. 2004), consistent with source credibility functioning as a risk indicator encouraging less scrutiny of information from a more
credible source.
I create a composite source credibility score from participants’ responses to three items that ask about the client’s expertise,
objectivity, and sophistication.16 I collected all measures on 11-point Likert scales anchored by 0, very low, and 10, very high.
A confirmatory factor analysis finds that one factor is explanatory of these data, so I use the first factor as the composite
measure of client source credibility.17
I divide participants at the median source credibility rating of 0.06 into lower and higher source credibility conditions.18 The mean (standard deviation) of -0.79 (0.64) in the lower credibility condition is significantly lower than the mean (standard deviation) of 0.82 (0.53) in the higher credibility condition (t(102) = 13.93, one-tailed p < 0.001).
expertise, objectivity, and sophistication across source credibility conditions; each dimension is significantly higher in the
higher credibility condition.
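As an illustrative sketch of this composite-score construction, the following simulates three credibility ratings driven by one latent factor, extracts the first factor of their correlation matrix (a principal-axis stand-in for the confirmatory factor analysis), and median-splits the scores. All data and parameter values below are hypothetical, not the study's.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 104
latent = rng.normal(size=n)                       # hypothetical latent credibility
ratings = np.column_stack(
    [5 + 1.8 * latent + rng.normal(scale=1.0, size=n) for _ in range(3)]
)  # stand-ins for the expertise, objectivity, and sophistication items

# Standardize the items and take the first factor of their correlation matrix.
z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
loading = eigvecs[:, np.argmax(eigvals)]
loading = loading * np.sign(loading.sum())        # orient: higher = more credible
score = z @ loading                               # composite credibility score

# Median split into lower and higher source credibility conditions.
higher = score > np.median(score)
```

With one strong common factor, the first eigenvalue dominates (mirroring the reported eigenvalue of 2.11 against all others below 1), and the split assigns half the sample to each condition.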

Dependent Variables
Measures Used in Tests of Hypotheses
To test H1, I measured the richness of auditors’ problem representations through free recalls. After completing the case and
putting it away, participants completed a surprise free recall in which they listed information from the case that was important
to their decisions about the client’s estimate. Surprise free recalls are commonly used to measure problem representations (e.g.,
Christ 1993; Bierstaker et al. 1999; Hammersley 2006). Richer problem representations contain more abstract knowledge and
inferences made from given information (Chi et al. 1982; Christ 1993; Di Sessa 2014). A doctoral student with auditing

16 When measuring the attractiveness dimension of source credibility, psychology research focuses on physical attractiveness (e.g., Joseph 1982; Ohanian 1990), which is not relevant in the context of this study. Instead, I rely on the concept of organizational attractiveness as the basis for asking about sophistication of the client organization (Lievens, Decaesteker, Coetsier, and Geirnaert 2001; Berthon, Ewing, and Hah 2005). Berthon et al. (2005) define organizational attractiveness as the benefits (social status, development opportunities, economic benefits, etc.) that a person anticipates receiving if they associate with an organization. A more sophisticated client offers more benefits, such as prestige and development opportunities, than a less sophisticated client and is, therefore, more attractive.
17 The eigenvalue for the first factor is 2.11 and all others are less than 1. The first factor explains 70 percent of the variance in the measures. The factor loadings for expertise, objectivity, and sophistication are 0.81, 0.86, and 0.85, respectively. The Cronbach alpha score, which captures the reliability of the source credibility measure, is 0.79 and is within accepted ranges (Peterson 1994).
18 Inferences do not change if I divide participants at the mean source credibility rating of 0.00 into lower and higher source credibility conditions.


experience and I independently coded each item listed as (1) recalling information given in the case, (2) combining given
information with other knowledge to make an inference, or (3) other (e.g., factually incorrect items).19 I use the proportion of
items coded into the second category relative to total items to measure the richness of problem representations, because this
reflects participants’ understanding of relationships among evidence that allows them to make inferences about the evidence
(Chi et al. 1982; Petty and Cacioppo 1986; Zhang 1997).20 Problem representations with a higher proportion of inferential
items reflect a deeper structural understanding of the situation (Chi et al. 1982; Di Sessa 2014), which is the element in
auditors’ problem representations (i.e., relational structure) that I expect the relational cue to enhance.
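The coding and richness measures can be sketched as follows. The rater codes below are hypothetical examples; the kappa function implements the standard chance-corrected agreement statistic, and richness is the proportion of inferential items among all items coded.

```python
from collections import Counter

# Hypothetical codes for recalled items from two raters:
# "given" = recalls given information, "inference" = combines given
# information with other knowledge, "other" = e.g., factually incorrect.
rater1 = ["inference", "given", "given", "inference", "other", "given"]
rater2 = ["inference", "given", "inference", "inference", "other", "given"]

def cohens_kappa(a, b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n**2
    return (observed - expected) / (1 - expected)

# Richness: inferential items as a proportion of all items in the resolved
# codes (the paper resolves rater differences before analysis).
resolved = rater1
richness = resolved.count("inference") / len(resolved)
```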
To test H2, I measured auditors’ judgments about the estimate. Participants responded to the question ‘‘How likely is it that
[client name]’s fair value is fairly stated?’’ on an 11-point Likert scale anchored by 0, not likely at all, and 10, extremely likely.

Other Dependent Measures


I collected additional dependent measures to further examine the theorized process. First, I measured auditors’
identification of potential issues in the estimate to gain insight into the process by which problem representations influence
judgments. After reading the case, but before putting it away, participants listed their concerns about the estimate, if any, and
the procedures they would perform to address them. A doctoral student with auditing experience and I coded each item as a
valid issue if following up on it could lead to identifying and quantifying possible misstatements in the estimate indicated by
inconsistencies or patterns among assumptions.21 If not, then it was coded as ‘‘other.’’22 Participants rated the extent of
management bias in the estimate on an 11-point Likert scale anchored by 0, not at all biased, and 10, extremely biased.23
I also measured auditors’ decisions about recommending that the client adjust its estimate to examine the links among
auditors’ problem representations, judgments, and decisions. Participants responded yes or no to the question ‘‘Would you
recommend to your manager that [client name] adjust its fair value?’’ after making their judgments about the estimate.

Experiment 1 Results
Manipulation Check
To test the relational cue manipulation, participants recalled what the specialist's memo said about the discount and long-term growth rates by choosing all that apply from a list of statements describing each rate as reasonable, unreasonable, at the conservative end of the range, or at the aggressive end of the range. A significantly greater proportion of participants in the relational cue condition chose "aggressive" at least once as compared to the no relational cue condition (83 percent versus 13 percent, χ²(1) = 50.81, p < 0.001), indicating a successful manipulation.24
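As a rough check, the manipulation-check test can be reproduced with a hand-rolled Pearson chi-square. The cell counts below are hypothetical approximations implied by the reported percentages and condition sizes, so the statistic lands near, but not exactly at, the reported value.

```python
def pearson_chi2(table):
    """Pearson chi-square statistic for a contingency table (list of rows)."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    total = sum(rows)
    return sum(
        (table[i][j] - rows[i] * cols[j] / total) ** 2
        / (rows[i] * cols[j] / total)
        for i in range(len(table))
        for j in range(len(table[0]))
    )

# Rows: relational cue, no relational cue; columns: chose "aggressive" at
# least once, did not (approximate counts, assuming ~52 per condition).
table = [[43, 9], [7, 45]]
stat = pearson_chi2(table)   # near the reported chi-square(1) = 50.81
```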

Tests of Hypotheses about the Effects of a Relational Cue and Source Credibility
The hypotheses predict that a relational cue will improve auditors’ problem representations and their judgments about a
biased estimate, but more so when a situational factor indicates higher (versus lower) risk. These predictions imply a causal
model in which richer problem representations increase identification of potential issues, which, in turn, reduces the assessed
likelihood that the biased estimate is fairly stated and increases the likelihood that auditors take an appropriate next action. I
first test the hypothesized ordinal interaction using contrast coding (Buckless and Ravenscroft 1990). Figure 1 presents all
results graphically to illustrate each interaction’s form. I then estimate a structural equations model to confirm that the results
are consistent with the predicted process.
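Because a planned-contrast F statistic depends only on cell means, cell sizes, and the ANOVA error mean square, it can be recomputed from Table 4's rounded summary statistics; rounding means the result lands near, not exactly at, the reported F = 5.44.

```python
# Cell order: [cue/lower-cred, no-cue/lower-cred, cue/higher-cred,
# no-cue/higher-cred]. Weights [+3, -1, -1, -1] encode the ordinal
# prediction that the first cell exceeds the other three.
means = [0.40, 0.26, 0.19, 0.21]
ns = [27, 26, 25, 25]
weights = [3, -1, -1, -1]
mse = 0.12  # error mean square from Table 4, Panel B

psi = sum(w * m for w, m in zip(weights, means))          # contrast value
se2 = mse * sum(w * w / n for w, n in zip(weights, ns))   # contrast variance
F = psi**2 / se2   # F with (1, 99) df; approximately 5.4
```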

19 We coded all of the data for this study while blind to experimental condition, and the non-author coder was blind to hypotheses. We met to resolve differences, and I report the resolved data here. Inter-rater agreement was 91 percent and Cohen's kappa was 0.87 (p < 0.001).
20 Examples of items coded as recalling given information include "management's projections and the reasoning behind them" and "the discount rate and inflation rate used." Examples of items coded as inferences include "management bias—management was very optimistic in each category, noting higher revenue, less expenses, and increased growth," "certain assumptions were on the higher sides of the ranges," and "our recalculation clearly shows above average projected revenue growth and below average expense growth." While some free recalls explicitly identify a pattern of management bias (e.g., the first inference example above), other relational statements that do not explicitly identify management bias also reflect rich problem representations (e.g., the second and third inference examples above).
21 Examples of items in this category include wanting to perform a combined sensitivity analysis that evaluates the collective impact of changes to multiple assumptions, obtain external evidence to support the client's projected increase in revenue based on a planned new product launch, or evaluate management's forecasting ability in light of historical inaccuracy.
22 Inter-rater agreement was 84 percent and Cohen's kappa was 0.56 (p < 0.001).
23 Bias ratings are not correlated with identification of potential issues (r = 0.111, p = 0.261), although they are significantly correlated with judgments about the estimate (r = 0.546, p < 0.001). I discuss bias rating results further in footnote 30.
24 I report one-tailed p-values for predicted directional tests.


TABLE 4
Richness of Auditors' Problem Representations in Experiment 1 (a)

Panel A: Descriptive Statistics for Richness of Problem Representations: Mean (SE) [n]
Lower credibility: No Relational Cue 0.26 (0.060) [26]; Relational Cue 0.40 (0.075) [27]; Overall 0.33 (0.049) [53]
Higher credibility: No Relational Cue 0.21 (0.073) [25]; Relational Cue 0.19 (0.056) [25]; Overall 0.20 (0.046) [50]
Overall: No Relational Cue 0.24 (0.047) [51]; Relational Cue 0.30 (0.049) [52]

Panel B: Two-Way ANOVA
Relational cue: df 1, MS 0.08, F 0.72, two-tailed p = 0.397
Source credibility: df 1, MS 0.45, F 3.93, two-tailed p = 0.050
Relational cue × Source credibility: df 1, MS 0.15, F 1.31, two-tailed p = 0.256
Error: df 99, MS 0.12

Panel C: Planned Contrasts
H1: Relational cue/lower credibility condition > all other conditions:* F(1, 99) = 5.44, p = 0.011
Pairwise Contrasts:
Relational cue vs. no relational cue in lower credibility condition:* F(1, 99) = 2.05, p = 0.078
Relational cue vs. no relational cue in higher credibility condition: F(1, 99) = 0.04, p = 0.839
Lower credibility vs. higher credibility in relational cue condition:* F(1, 99) = 4.93, p = 0.015
Lower credibility vs. higher credibility in no relational cue condition: F(1, 99) = 0.35, p = 0.556
Relational cue/lower credibility condition vs. no relational cue/higher credibility condition:* F(1, 99) = 4.05, p = 0.024

* The one-tailed p-value is reported for this directional prediction.
a Relational Cue is manipulated at two levels: no relational cue and relational cue. Source Credibility is based on a composite score from participants' responses on three 11-point Likert scales that ask about the expertise and objectivity of the preparer of the estimate and the sophistication of the client. Each scale is anchored by 0, very low, and 10, very high. I use the first factor from a factor analysis as the credibility score (eigenvalue of first factor = 2.11, and less than 1 for remaining factors; Cronbach's alpha = 0.79). Participants are split at the median into two levels of Source Credibility: lower and higher. Richness of Problem Representation is the proportion of items listed in a surprise free recall after participants put away the case materials that were coded as combining the given information with other relevant knowledge to understand relationships and make inferences relative to all items listed.

H1: Problem Representations


Table 4 reports descriptive statistics, an ANOVA with relational cue and source credibility as independent variables and the richness of problem representations as the dependent variable, and the planned contrast that tests H1.25 Consistent with the hypothesized pattern, auditors in the relational cue/lower credibility condition include more rich items in their problem representations (M = 0.40) than auditors in all other conditions (M = 0.19–0.26).26 The planned contrast testing H1 in Panel C

25 Potential covariates include firm, experimental session, preparer type (in-house versus third party), and various measures of experience, knowledge, and effort. None of the potential covariates that are correlated with the dependent variables are significant in the models testing H1 or H2 as main effects or in all possible interactions with independent variables. As a result of these analyses, I do not include covariates in the Experiment 1 analyses.
26 Pairwise contrast tests reported in Table 4, Panel C indicate that these differences are marginally significant at p < 0.10 (one comparison) or significant at p < 0.05 (two comparisons).


confirms that auditors in the relational cue/lower credibility condition develop significantly richer problem representations than auditors in other conditions (p = 0.011).27 The residual between-cells variation is insignificant (F(2, 99) = 0.266, two-tailed p = 0.767), indicating that the hypothesized contrast explains the data well. The auditors' problem representations support H1.28

H2: Auditor Judgments about the Estimate


Table 5 reports descriptive statistics, an ANOVA with relational cue and source credibility as independent variables and auditors' assessments of how likely the estimate is to be reasonable as the dependent variable, and the planned contrast that tests H2. Consistent with the hypothesized pattern, auditors in the relational cue/lower credibility condition assess the estimate as less reasonable (M = 4.44) than auditors in all other conditions (M = 5.56–6.36).29 The planned contrast testing H2 in Panel C confirms that auditors assess the estimate as significantly less reasonable in the relational cue/lower credibility condition than in other conditions (p < 0.001). The residual between-cells variation is insignificant (F(2, 100) = 0.94, two-tailed p = 0.396), indicating that the hypothesized contrast explains the data well. The auditors' assessments of the likelihood that the estimate is reasonable support H2.

Test of Theorized Model


Figure 2 shows the theorized model of how relational cues affect auditors’ judgments and decisions about the estimate
across risk indicator conditions. My theory predicts that richer problem representations influence judgments by helping auditors
identify potential issues in the estimate, so I include issue identification in the causal path.30 I also include auditors’ decisions to
examine whether auditors’ judgments influence their decisions about how to proceed. Figure 2 shows the model including the
critical predicted paths (Links 1–4), as well as other possible paths that are consistent with the theory, but not explicitly
predicted. Links 1–4 must be significant to support the theorized process. If the other links are significant, they suggest partial,
rather than full, mediation, but do not invalidate inferences supporting the hypothesized causal process (Hayes 2013). I do not
predict full versus partial mediation based on my theory because a relational cue or risk indicator could have direct effects on
auditors’ judgments and decisions not contemplated by my theory, in addition to their indirect effects via problem
representations. For example, the problem signaled by the cue or the higher risk conveyed by the indicator could prompt
conservative, but heuristic, judgments due to risk as feelings (Slovic and Peters 2006; Nolder and Kadous 2017). I consider
Links 5–9, in addition to the critical path through Links 1–4, to identify the most parsimonious model consistent with a priori
theory (Kline 2011) while avoiding untested claims of full mediation. Links 5–9 capture the potential effects of the independent
and mediating variables incremental to the causal path depicted in Links 1–4.
I estimate the structural equations model in Figure 3 to test the theorized process, with client source credibility as the risk
indicator. I use a nested model comparison to test whether the effect of a relational cue differs between source credibility
conditions (Rigdon, Schumacker, and Wothke 1998). The model fits the data well.31,32 For parsimony, the model includes only
the significant paths from all of the theory-consistent paths shown in Figure 2 because the non-significant paths are not critical
to the theory and do not improve model fit (Kline 2011).33
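Because the chi-square survival function for 2 degrees of freedom has a closed form, the nested-comparison p-value can be checked by hand. The difference statistic of 5.33 on 2 df is the value reported in the results below; reading the reported p = 0.035 as the halved, directional value follows the one-tailed convention noted in footnote 24 and is my interpretation.

```python
import math

# For df = 2, the chi-square survival function is exactly exp(-x / 2).
def chi2_sf_df2(x):
    return math.exp(-x / 2)

delta_chi2 = 5.33                    # fit improvement from freeing the cue paths
p = chi2_sf_df2(delta_chi2)          # roughly 0.070, two-tailed
p_directional = p / 2                # roughly 0.035 for the directional test
```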

27 The planned contrast with weights [+3, -1, -1, -1] tests the specific ordinal prediction, based on the theoretical development, that a relational cue will (not) cause different outcomes when an indicator of higher (lower) risk is present. A more general difference-in-differences ordinal prediction that a relational cue will have a greater effect under higher versus lower risk, while allowing for the possibility of an effect of relational cue under lower risk, is also consistent with the theory. However, the traditional ANOVA (disordinal) interaction term that would test the more general prediction is a low-powered test of an ordinal prediction of a difference-in-differences, considering that a more specific prediction can be theoretically supported and tested with an ordinal planned contrast (Buckless and Ravenscroft 1990; Guggenmos, Piercey, and Agoglia 2017). The alternative contrast weights [+3, +1, -1, -3] are consistent with the ordinal difference-in-differences prediction, but allow for an effect of cue in the lower-risk condition and an effect of risk in the no cue condition. I test this alternative contrast for each hypothesis in both experiments and find significant results at p < 0.05 (H1 in Experiment 1, H2 in Experiments 1 and 2) or p < 0.10 (H1 in Experiment 2), demonstrating that the results are robust to alternative specification.
28 I repeat these analyses using total items included in problem representations; as expected, the planned contrast is not significant (F(1, 99) = 0.51, two-tailed p = 0.478). This provides further evidence that the relational cue in combination with an indicator of higher risk specifically improves the relational structure in auditors' problem representations, rather than merely increasing the quantity of elements included.
29 Pairwise contrast tests reported in Table 5, Panel C indicate that these differences are significant at p < 0.05 (one comparison) or p < 0.01 (two comparisons).
30 I do not include auditors' ratings of management bias in the model because the independent variables did not significantly affect bias alone or in interaction (all two-tailed p > 0.50), and pairwise contrast tests find that auditors in the relational cue/lower credibility condition do not rate the estimate as more biased than auditors in the other conditions (all two-tailed p > 0.50), indicating that this construct would not increase the model's explanatory power.
31 The traditional chi-square test shows good fit (χ²(13) = 7.85, two-tailed p = 0.853). The Root Mean Square Error of Approximation (RMSEA = 0.00) and Comparative Fit Index (CFI = 1.00) are well within ranges indicating good fit (Byrne 2010; Hu and Bentler 1999).
32 The model uses counts, rather than proportions, for the problem representation dependent variable because this yields a better-fitting model. Overall inferences about H1 and H2 do not change if proportions are used, although the nested model comparison testing the interaction is marginally significant at p < 0.10 and the path from problem representations to issues is not significant.
33 Following a model trimming approach (Kline 2011), I first estimated the full model shown in Figure 2. While model fit is good (χ²(8) = 5.23, two-tailed p = 0.733, RMSEA = 0.00, and CFI = 1.00), three non-critical paths (Links 6, 8, and 9) are not significant. Kline (2011) advocates accepting the simpler model so long as its fit does not appreciably differ from the more complex model.


TABLE 5
Auditors' Assessments of Likelihood that Estimate is Fairly Stated in Experiment 1 (a)

Panel A: Descriptive Statistics for Likelihood Assessments: Mean (SE) [n]
Lower credibility: No Relational Cue 5.56 (0.375) [26]; Relational Cue 4.44 (0.390) [27]; Overall 4.99 (0.279) [53]
Higher credibility: No Relational Cue 6.03 (0.371) [26]; Relational Cue 6.36 (0.400) [25]; Overall 6.19 (0.271) [51]
Overall: No Relational Cue 5.80 (0.263) [52]; Relational Cue 5.37 (0.307) [52]

Panel B: Two-Way ANOVA
Relational cue: df 1, MS 4.03, F 1.05, two-tailed p = 0.308
Source credibility: df 1, MS 36.94, F 9.64, two-tailed p = 0.002
Relational cue × Source credibility: df 1, MS 13.59, F 3.55, two-tailed p = 0.063
Error: df 100, MS 3.83

Panel C: Planned Contrasts
H2: Relational cue/lower credibility condition < all other conditions:* F(1, 100) = 12.36, p < 0.001
Pairwise Contrasts:
Relational cue vs. no relational cue in lower credibility condition:* F(1, 100) = 4.31, p = 0.020
Relational cue vs. no relational cue in higher credibility condition: F(1, 100) = 0.36, p = 0.550
Lower credibility vs. higher credibility in relational cue condition:* F(1, 100) = 12.43, p < 0.001
Lower credibility vs. higher credibility in no relational cue condition: F(1, 100) = 2.12, p = 0.149
Relational cue/lower credibility condition vs. no relational cue/higher credibility condition:* F(1, 100) = 8.70, p = 0.002

* The one-tailed p-value is reported for this directional prediction.
See definitions of Relational Cue and Source Credibility in Table 4. Likelihood Assessment is the response to the question "How likely is it that the client's fair value is fairly stated?" on an 11-point Likert scale anchored by 0, not at all likely, and 10, extremely likely.

The path coefficients shown in Figure 3 provide empirical support for the theorized process. The effect of a relational cue on problem representations (Link 1) and judgments (Link 5) depends on source credibility (χ²(2) = 5.33, p = 0.035), consistent with the hypotheses. The remaining coefficients are also consistent with the predicted process. Problem representations positively influence issue identification (Link 2, p = 0.042), which negatively influences judgments (Link 3, p = 0.006), which negatively influence adjustment recommendations (Link 4, p < 0.001). Links 1–4 support the theorized process, while the additional significant paths (Links 5 and 7) suggest partial mediation. These results confirm the overall process underlying the hypotheses and suggest that improving auditors' problem representations is a crucial first step in improving their judgments and decisions about estimates.
Experiment 1 provides strong support for the hypotheses, but it has an important limitation: one of the two independent
variables is measured, rather than manipulated. This weakens the causal inferences that can be made because an unmeasured
factor correlated with the measure of source credibility could drive the results (Peecher and Solomon 2001).


FIGURE 2
Theoretical Model of Mediators Affecting Auditor Judgments and Decisions

This figure shows the theorized serial mediation model of how a relational cue affects auditors’ problem representations, identification of valid issues,
judgments of reasonableness, and decisions about estimates under varying risk conditions. The links denoted with an asterisk (*) depend on risk indicator
condition. Full mediation via the predicted process would occur via the paths shown with solid lines and labeled in bold (Links 1–4). I explicitly predict
these paths to be significant, and in the case of Link 1, to vary by risk indicator condition. While I do not explicitly predict the paths shown by dotted lines,
those paths may exist and are also consistent with the theorized process so long as the predicted paths remain significant in the presence of the additional
paths (i.e., indicating partial, rather than full, mediation).

IV. EXPERIMENT 2
To overcome Experiment 1's limitation, in Experiment 2 I manipulate both the relational cue and engagement risk. I randomly
assign participants to all conditions to allow for stronger causal inferences. Operationalizing risk as engagement risk also
extends the generalizability of the theory to a risk indicator other than client source credibility while conceptually
replicating Experiment 1. Moreover, unlike client source credibility, engagement risk does not directly influence the
preparation of the estimate. I use the case and procedures from Experiment 1 with the changes described below.
In Experiment 2, 165 experienced senior auditors from a Big 4 firm and a national firm participated during firm-sponsored
training or online.34 Participants average 3.6 years of experience and 2.0 audits with discounted cash flow models. Table 1
reports additional demographic information indicating that the participants are appropriate for the study.

Independent Variables
I manipulate the relational cue as present or absent, as in Experiment 1. I also manipulate engagement risk between
participants. Engagement risk is the risk of harm to the audit firm as a result of the audit engagement (Hackenbrack and Nelson
1996), and it encompasses audit risk and client and auditor business risks (Colbert et al. 1996). While engagement risk

34 I observe no differences in results in Experiment 2 due to administration in-person versus online.


FIGURE 3
Empirical Model of Mediators Affecting Auditor Judgments and Decisions in Experiment 1

This figure shows the results of the path analysis testing the serial mediation from relational cue and source credibility to adjustment recommendation
through problem representation, identification of valid issues, and assessed likelihood that the estimate is fairly stated. For parsimony, the model excludes
the non-predicted paths from Figure 2 that are not significant (Links 6, 8, and 9) because they do not impact the theoretical inferences, yet reduce model fit.
The model fits the data well (χ²(13) = 7.85, two-tailed p = 0.853, Comparative Fit Index = 1.00, Root Mean Square Error of Approximation = 0.00). All
p-values are one-tailed, reflecting directional predictions. The model above allows Links 1 and 5 to vary across source credibility conditions because
nested-model comparisons revealed significantly better model fit when the model allows these links to vary (χ²(2) = 5.33, p = 0.035), indicating a significant
interaction of relational cue and source credibility.

increases the consequences of failing to identify a misstatement and can increase the ex ante risk of misstatement, it does not
diagnose specific problems within the estimate. I use Hackenbrack and Nelson’s (1996) manipulation of engagement risk.
Participants randomly assigned to the lower (higher) engagement risk condition read background information describing the
client as facing conditions suggesting less (more) severe consequences should the audit firm issue materially misstated financial
statements prior to reading the case information and answering case questions. Appendix B contains the complete
manipulations.

Dependent Variables
Measures Used in Tests of Hypotheses
To test H1, I measured the richness of auditors’ problem representations through free recalls, as in Experiment 1. For all
coding for Experiment 2, a doctoral student with auditing experience and I followed the coding procedures described for
Experiment 1.35 To test H2, I measured auditors’ judgments about the estimate, as in Experiment 1.

35 For problem representation coding, inter-rater agreement was 93 percent and Cohen's kappa was 0.68 (p < 0.001). For issue identification coding,
inter-rater agreement was 95 percent and Cohen's kappa was 0.85 (p < 0.001).


Other Dependent Measures


As in Experiment 1, I collected additional dependent measures to provide further insight into how problem representations
influence issue identification and judgments, and how judgments influence the actions auditors take next. To increase the
generalizability of the Experiment 1 results, I used different questions to elicit these measures in Experiment 2.
I measured auditors’ identification of potential issues by asking participants to write the reasons for their conclusion about
the fair value and/or the issues they would discuss with their manager. As an additional process measure, participants
completed the Personal Involvement Inventory, which measures individuals’ involvement with judgment targets (Zaichkowsky
1985), because personal involvement can increase motivation to elaborate (Chaiken 1980; Chaiken and Maheswaran 1994;
Petty and Cacioppo 1984).
I also measured auditors' decisions about what action they would take next. Participants answered the question "What
would you like to do at this point?" by choosing from four options: conclude the fair value is reasonable; continue under the
assumption that the fair value is reasonable, but delay forming a conclusion until talking to the manager the next time he or she
is on site; do not make a conclusion, but call the manager immediately to discuss issues that may indicate that the fair value is
not reasonable; or conclude that the fair value is materially overstated (e.g., Kadous, Leiby, and Peecher 2013; Griffith et al.
2015b). Participants gave reasons for their conclusion and/or the issues they would discuss with the manager, along with this
choice. The four options reflect increasing urgency. I use this measure to examine the links among auditors’ problem
representations, judgments, and decisions.

Experiment 2 Results
Manipulation Checks
I use the same relational cue manipulation check as in Experiment 1. A significantly greater proportion of participants in
the relational cue condition chose "aggressive" at least once as compared to the no relational cue condition (68 percent versus
34 percent, χ²(1) = 19.71, p < 0.001). To test the engagement risk manipulation, participants rated the level of engagement risk
on an 11-point Likert scale anchored by 0, very low risk, and 10, very high risk. Participants rated engagement risk significantly
higher in the higher versus lower engagement risk condition (M = 5.91 versus 4.91, F(1, 162) = 11.52, p < 0.001). These analyses
indicate successful manipulations.
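The proportions test above can be sketched from the reported figures. The cell counts below are back-calculated from the reported percentages and the cell sizes in Table 6 (68 percent of 82 auditors in the cue condition, 34 percent of 83 in the no-cue condition); they are my reconstruction, not counts reported in the paper.

```python
from scipy.stats import chi2_contingency

# 2 x 2 table, reconstructed from reported percentages (hypothetical counts):
# rows = condition (relational cue, no relational cue),
# cols = chose "aggressive" at least once (yes, no).
table = [[56, 26],   # relational cue: 56/82 is about 68 percent
         [28, 55]]   # no relational cue: 28/83 is about 34 percent

# Pearson chi-square test of independence, without continuity correction.
chi2, p, dof, expected = chi2_contingency(table, correction=False)
# chi2 is approximately 19.7 with 1 degree of freedom, matching the reported statistic
```

That the reconstruction reproduces the reported χ²(1) = 19.71 suggests the test was run without Yates' continuity correction.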

Tests of Hypotheses about the Effects of a Relational Cue and Engagement Risk
H1: Problem Representations
Table 6 reports descriptive statistics, an ANOVA with relational cue and engagement risk as independent variables and
the richness of problem representations as the dependent variable, and the planned contrast that tests H1.36 Figure 1
illustrates the results graphically for both hypotheses. Consistent with the hypothesized pattern, auditors in the relational cue/
higher engagement risk condition include more rich items in their problem representations (M = 0.25) than auditors in all
other conditions (M = 0.10–0.13).37 The planned contrast testing H1 in Panel C confirms that auditors in the relational cue/
higher engagement risk condition develop significantly richer problem representations than auditors in other conditions (p =
0.015). The residual between-cells variation is insignificant (F(2, 161) = 0.107, two-tailed p = 0.899), indicating that the
hypothesized contrast explains the data well. The auditors' problem representations support H1 and corroborate Experiment
1's H1 results.38
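The planned contrast used here weights the predicted cell against the remaining three (weights +3, −1, −1, −1; see Buckless and Ravenscroft 1990). The sketch below shows this computation on simulated scores whose cell means and sizes mirror Table 6; the individual scores and their spread are hypothetical, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical richness scores; only the cell means and ns mirror Table 6.
cells = {
    "cue/higher_risk": rng.normal(0.25, 0.30, 41),
    "cue/lower_risk": rng.normal(0.13, 0.30, 41),
    "no_cue/higher_risk": rng.normal(0.12, 0.30, 40),
    "no_cue/lower_risk": rng.normal(0.10, 0.30, 43),
}
# H1 contrast: the predicted cell against the other three conditions.
weights = {"cue/higher_risk": 3, "cue/lower_risk": -1,
           "no_cue/higher_risk": -1, "no_cue/lower_risk": -1}

# Pooled within-cell error variance (MSE) and its degrees of freedom.
n_total = sum(len(v) for v in cells.values())
df_error = n_total - len(cells)                   # 165 - 4 = 161
mse = sum(((v - v.mean()) ** 2).sum() for v in cells.values()) / df_error

# Contrast estimate and its F-statistic.
estimate = sum(w * cells[k].mean() for k, w in weights.items())
var_est = mse * sum(w ** 2 / len(cells[k]) for k, w in weights.items())
f_stat = estimate ** 2 / var_est
# Halve the F tail for the directional prediction (valid when the estimate
# falls in the predicted, positive direction).
p_one_tailed = stats.f.sf(f_stat, 1, df_error) / 2
```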

H2: Auditor Judgments about the Estimate


Table 7 reports descriptive statistics, an ANOVA with relational cue and engagement risk as independent variables and
auditors’ assessments of how likely the estimate is to be reasonable as a dependent variable, and the planned contrast that

36 Potential covariates include firm, experimental session, and various measures of experience, knowledge, and effort. Of the potential covariates that are
correlated with the dependent variables, none are significant in the models testing H1 or H2 as main effects or in all possible interactions with
independent variables, except for effort measures, which are significant as main effects only in the model testing H1. Including these covariates in the
model testing H1 yields slightly stronger results, but inferences are unchanged. As a result of these analyses, I do not include covariates in the
Experiment 2 analyses.
37 Pairwise contrast tests reported in Table 6, Panel C indicate that these differences are marginally significant at p < 0.10 (one comparison) or significant
at p < 0.05 (two comparisons).
38 I repeat these analyses using total items included in problem representations; as expected, the planned contrast is not significant (F(1, 161) = 0.75, two-
tailed p = 0.388). This provides further evidence that the relational cue in combination with an indicator of higher risk improves the relational structure
in auditors' problem representations, rather than merely increasing the quantity of elements included.


TABLE 6
Richness of Auditors' Problem Representations in Experiment 2a

Panel A: Descriptive Statistics for Richness of Problem Representations: Mean (SE) [n]

                           No Relational Cue     Relational Cue        Overall
Higher engagement risk     0.12 (0.045) [40]     0.25 (0.056) [41]     0.19 (0.037) [81]
Lower engagement risk      0.10 (0.049) [43]     0.13 (0.045) [41]     0.11 (0.030) [84]
Overall                    0.11 (0.030) [83]     0.19 (0.036) [82]

Panel B: Two-Way ANOVA

Source of Variation                    df     MS      F       Two-Tailed p-value
Relational cue                         1      0.24    2.35    0.127
Engagement risk                        1      0.19    1.81    0.180
Relational cue × Engagement risk       1      0.09    0.85    0.359
Error                                  161

Panel C: Planned Contrasts

Planned Contrast                                                                  F(1, 161)   p-value
H1: Relational cue/higher engagement risk condition > all other conditions*       4.79        0.015
Pairwise Contrasts:
Relational cue vs. no relational cue in higher engagement risk condition*         2.96        0.044
Relational cue vs. no relational cue in lower engagement risk condition           0.19        0.662
Higher engagement risk vs. lower engagement risk in relational cue condition*     2.55        0.056
Higher engagement risk vs. lower engagement risk in no relational cue condition   0.09        0.764
Relational cue/higher engagement risk condition vs. no relational cue/lower
  engagement risk condition*                                                      4.22        0.021

* The one-tailed p-value is reported for this directional prediction.
a See definitions of Relational Cue and Richness of Problem Representation in Table 4. Engagement Risk is manipulated at two levels: lower and higher.

tests H2. Consistent with the hypothesized pattern, auditors in the relational cue/higher engagement risk condition assess
the estimate as significantly less reasonable (M = 5.54) than auditors in all other conditions (M = 6.42–6.69).39 The
planned contrast testing H2 in Panel C confirms that auditors assess the estimate as significantly less likely to be reasonable
in the relational cue/higher engagement risk condition than in other conditions (p = 0.002). The residual between-cells
variation is insignificant (F(2, 161) = 0.246, two-tailed p = 0.782), indicating that the hypothesized contrast explains the data
well. The auditors' assessments of the likelihood that the estimate is reasonable support H2 and corroborate Experiment 1's
H2 results.

39 Pairwise contrast tests reported in Table 7, Panel C indicate that these differences are significant at p < 0.05 (one comparison) or p < 0.01 (two
comparisons).


TABLE 7
Auditors' Assessments of Likelihood that Estimate is Fairly Stated in Experiment 2a

Panel A: Descriptive Statistics for Likelihood Assessment: Mean (SE) [n]

                           No Relational Cue     Relational Cue        Overall
Higher engagement risk     6.69 (0.365) [40]     5.54 (0.305) [41]     6.11 (0.223) [81]
Lower engagement risk      6.42 (0.269) [43]     6.66 (0.289) [41]     6.54 (0.196) [84]
Overall                    6.55 (0.223) [83]     6.10 (0.218) [82]

Panel B: Two-Way ANOVA

Source of Variation                    df     MS       F       Two-Tailed p-value
Relational cue                         1      8.55     2.20    0.140
Engagement risk                        1      7.50     1.93    0.167
Relational cue × Engagement risk       1      19.94    5.12    0.025
Error                                  161    3.90

Panel C: Planned Contrasts

Planned Contrast                                                                  F(1, 161)   p-value
H2: Relational cue/higher engagement risk condition < all other conditions*       8.75        0.002
Pairwise Contrasts:
Relational cue vs. no relational cue in higher engagement risk condition*         6.88        0.005
Relational cue vs. no relational cue in lower engagement risk condition           0.31        0.578
Higher engagement risk vs. lower engagement risk in relational cue condition*     6.62        0.006
Higher engagement risk vs. lower engagement risk in no relational cue condition   0.39        0.536
Relational cue/higher engagement risk condition vs. no relational cue/lower
  engagement risk condition*                                                      4.19        0.021

* The one-tailed p-value is reported for this directional prediction.
a See definitions of Relational Cue in Table 4, Likelihood Assessment in Table 5, and Engagement Risk in Table 6.

Test of Theorized Model


I estimate the structural equations model in Figure 4 to test the process by which the interaction of a relational cue and
engagement risk affects auditors’ judgments and decisions about the estimate.40 I use a nested model comparison to test
whether the effect of a relational cue differs between the lower and higher engagement risk conditions (Rigdon et al. 1998). The
model fits the data well.41 The model includes only the significant paths from all theory-consistent paths shown in Figure 2
because the non-significant paths are not critical to the theory and do not improve model fit (Kline 2011).42

40 I do not include auditors' scores on the Personal Involvement Inventory in this model because the independent variables did not significantly affect
involvement alone or in interaction (all two-tailed p > 0.14), and pairwise contrast tests find that auditors in the relational cue/higher engagement risk
condition are not more involved than auditors in the other conditions (all two-tailed p > 0.22), indicating that this construct would not increase the
model's explanatory power.
41 The traditional χ² test shows good fit (χ²(10) = 6.31, two-tailed p = 0.789). Both the RMSEA of 0.00 and the CFI of 1.00 are well within the ranges
indicating good fit (Byrne 2010; Hu and Bentler 1999).
42 As in Experiment 1, I first estimated the full model shown in Figure 2. While model fit is good (χ²(7) = 6.59, two-tailed p = 0.472, RMSEA = 0.00, and
CFI = 1.00), two non-critical paths (Links 5 and 8) are not significant.


FIGURE 4
Empirical Model of Mediators Affecting Auditor Judgments and Decisions in Experiment 2

This figure shows the results of the path analysis testing the mediation from relational cue to action through problem representation, identification of valid
issues, and assessed likelihood that the estimate is fairly stated. For parsimony, the model excludes the non-predicted paths from Figure 2 that are not
significant (Links 5 and 8) because they do not impact the theoretical inferences, yet reduce model fit. The model fits the data well (χ²(10) = 6.31, two-tailed
p = 0.789, Comparative Fit Index = 1.00, Root Mean Square Error of Approximation = 0.00). All p-values are one-tailed, reflecting directional predictions.
The model above allows Links 1, 3, and 6 to vary across engagement risk conditions because nested-model comparisons revealed significantly better
model fit when the model allows these links to vary (χ²(3) = 8.36, p = 0.020). The significant chi-square difference test indicates a significant interaction of
relational cue and engagement risk.

The path coefficients shown in Figure 4 provide empirical support for the theorized process. Preliminary analyses indicate
that three paths vary across engagement risk conditions: Links 1, 3, and 6. As expected, the path from relational cue to problem
representations (Link 1) is insignificant for the lower engagement risk condition (p = 0.311), but is positive and significant for
the higher engagement risk condition (p = 0.035). In addition, the path from relational cue to issue identification (Link 6) is
insignificant for the lower engagement risk condition (p = 0.456), but is positive and significant for the higher engagement risk
condition (p = 0.024). Finally, while model fit improves significantly when the path from issue identification to judgments
(Link 3) is allowed to vary across engagement risk conditions (χ²(1) = 5.09, two-tailed p = 0.024), the path coefficient is negative
and significant in both engagement risk conditions (p = 0.016 for lower risk, p < 0.001 for higher risk). This indicates that the
strength, but not the direction, of the relation from issue identification to judgments depends on engagement risk. While not
explicitly predicted, the Link 3 and 6 results are consistent with the theory. Overall, the difference in the effect of engagement
risk across relational cue conditions is significant (χ²(3) = 8.36, p = 0.020). These results indicate that the direct effect of a
relational cue on problem representations and the indirect effect of a relational cue on judgments depend on engagement risk,
consistent with the hypotheses.
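The nested-model comparison above reduces to a chi-square difference test: the constrained model's fit statistic minus the freed model's, evaluated against a chi-square distribution with degrees of freedom equal to the number of freed parameters. The sketch below uses the freed-model fit reported for this analysis; the constrained-model values are implied by the reported difference, not stated directly in the paper.

```python
from scipy.stats import chi2

# Freed model (Links 1, 3, and 6 vary by engagement risk), as reported.
chi2_free, df_free = 6.31, 10
# Constrained model (those links forced equal across conditions);
# implied by the reported difference test, a reconstruction.
chi2_constrained, df_constrained = 14.67, 13

delta_chi2 = chi2_constrained - chi2_free     # 8.36
delta_df = df_constrained - df_free           # 3
# Upper-tail probability of the chi-square difference.
p_two_tailed = chi2.sf(delta_chi2, delta_df)  # about 0.039; about 0.020 when halved
```

Halving the two-tailed value for the directional prediction yields approximately the p = 0.020 reported in the text.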
The remaining coefficients are also consistent with the predicted process. Problem representations positively influence
issue identification (Link 2, p < 0.001), which negatively influences judgments (Link 3, all p < 0.02), which negatively
influence actions (Link 4, p < 0.001), indicating that auditors who assess the estimate as less reasonable decide to act more


urgently. Links 1–4 support the theorized process, while the additional significant paths (Links 6, 7, and 9) suggest partial
mediation.43 These results are theoretically consistent with the results of Experiment 1, provide causal evidence supporting the
overall process underlying the hypotheses, and extend the generalizability of the theorized effect of risk indicators to include
engagement risk in addition to source credibility.

V. DISCUSSION AND CONCLUSION


Auditors, especially non-experts, struggle to identify misstatements indicated by problematic patterns among assumptions
underlying estimates when those assumptions appear reasonable individually. This can lead to lower-quality audits of estimates.
The nature of auditors’ trouble identifying these misstatements suggests that one cause is inadequate problem representations. In
this study, I experimentally examine the joint effect of a relational cue from a specialist and situational factors indicating higher
risk of misstatement on auditors’ problem representations and evaluation of estimates. Auditors reviewed the work done by the
audit team and specialist to test the key assumptions in an estimate and made judgments about the estimate.
I find that auditors benefit from a relational cue when a situational factor indicates higher, but not lower, risk. Two
experiments show that this result is robust to two different risk indicators: client source credibility and engagement risk.
Auditors who receive a relational cue and an indicator of higher risk develop richer problem representations that lead them to
identify more potential issues in an estimate, which, in turn, leads them to judge a biased estimate as less reasonable than
auditors who receive an indicator of lower risk. These judgments influence auditors’ subsequent decisions about whether to
suggest adjusting the estimate and how urgently to communicate with their managers. The theory supported by these results
suggests that the combination of a relational cue and an indicator of higher risk causes auditors to develop richer problem
representations that improve their judgments and decisions about estimates.
This study contributes to audit research and practice. First, it provides insight into the conditions under which auditors
benefit from relational cues in their specialists’ work. I find that a relational cue helps auditors develop problem representations
that relate the evidence and assumptions to each other and to the estimate in a way that helps auditors recognize problematic
patterns when an indicator of higher risk motivates them to elaborate on the available evidence. Thus, a relational cue in
specialists’ work appears to be more beneficial in combination with a situational factor that increases auditors’ elaboration on
the cue.
Firms might choose to intervene in situations where auditors use specialists and situational factors suggest that auditors
will not approach specialists’ work with heightened motivation to scrutinize the details. Similar to higher source credibility and
lower engagement risk, factors such as a long and positive history with the client or auditors’ positive impressions of a client’s
technical expertise are unlikely to motivate auditors to scrutinize specialists’ work, because these conditions do not increase
expectations of problems in estimates or heighten the adverse consequences of failing to find problems. In these situations,
firms could enhance auditors’ accountability, prompt goals for understanding how specialists reached their conclusions,
increase the salience of intrinsic motivation (Kadous and Zhou 2016), or otherwise intervene to increase auditors’ motivation to
elaborate on specialists’ work.
Auditors also frequently use the work of internal auditors, external auditors from other firms, and other audit team
members. Future research can explore whether and to what extent auditors benefit from increased motivation when using the
work of others closer in expertise to themselves than specialists, because understanding other auditors’ work likely requires less
effortful elaboration. Alternatively, auditors might be less motivated to scrutinize specialists’ work than other auditors’ work
out of deference to specialists’ perceived authority conferred by their domain-specific expertise (e.g., Jenkins, Negangard, and
Oler 2016). This possibility might partially explain subordinate auditors’ deference to supervisors’ views (Libby and Trotman
1993; Wilks 2002).
Second, this study shows that improving auditors’ problem representations can improve audits of estimates when the
assumptions collectively suggest a problem, but do not appear problematic individually. Future research can explore factors and
interventions other than relational cues that might improve problem representations by targeting a specific deficiency within
problem representations (i.e., relational structure). Firms might target deficient relational structures in auditors’ problem
representations in a number of ways. Firms might use training on valuation methods, as auditors who specialize in an area form
richer problem representations in that area (Hammersley 2006). Firms might develop decision aids targeting auditors’
understanding of the relationships among assumptions and between assumptions and the final estimate.44 Firms might also

43 The non-critical paths that are significant differ between the two experiments. However, overall results are consistent with the theorized causal process
in both experiments because Links 1–4 are significant in the predicted directions, and the differences between Figures 3 and 4 do not suggest different
causal processes.
44 This type of decision aid differs from the checklists commonly used to aggregate misstatements because it does not require that auditors identify
misstatements, or even problems that could signal possible misstatements, as a prerequisite for its use.


adapt training approaches for analytical procedures that teach auditors to consider a client’s ratios in total to evaluate
relationships among those ratios, which helps develop pattern recognition skills similar to those needed when auditing
estimates.
Finally, other types of domain-specific expertise, such as systems thinking and business modeling (e.g., Peecher, Schwartz,
and Solomon 2007; Brewster 2011), likely enhance problem representations of other estimates (e.g., loan loss reserves,
warranty reserves, real estate valuations) based on different assumptions and models. Future research might identify and
address deficiencies in other types of domain-specific expertise relevant to complex estimates.
One promising route for improving audits of estimates involves changing auditors’ mindsets (Griffith et al. 2015b).
Auditors, who tend to default to an implemental mindset, show improved judgments about estimates when in a deliberative
mindset that fosters broader consideration of evidence (Griffith et al. 2015b). My study examines a different mechanism by
which auditors’ cognition can be influenced to improve their judgments about estimates—by increasing their motivation to
elaborate on the evidence. The absence of any manipulations or task features likely to instill a deliberative mindset (e.g.,
Griffith et al. 2015a; Nolder and Kadous 2017) suggests that auditors likely approached the task in an implemental mindset.45
My study shows that situational factors indicating higher risk can prompt cognition that helps auditors identify problems within
estimates, even without targeting auditors’ mindsets. While a deliberative mindset would likely help auditors develop richer
problem representations by fostering broader consideration of evidence, my study suggests that auditors can develop rich
problem representations in their ‘‘default’’ mindset provided that they have sufficient epistemic motivation. Future research can
examine the incremental benefit of a deliberative mindset to auditors’ judgments about estimates under higher risk conditions,
and the possibility that a deliberative mindset increases auditors’ use of specialists’ cues under lower risk conditions.
Regardless of the approach used to improve auditors’ problem representations, it is important that auditors can diagnose
problems within estimates that cause misstatements. Auditors struggle to persuade clients to adjust estimates when they
disagree with the client about assumptions or other subjective components of estimates (Griffith 2016). The focus of this study
is a first step in this process: helping auditors recognize when an adjustment is warranted. Recent research finds that auditors
more often propose adjustments when they use specialists (Cannon and Bedard 2017), perhaps because specialists help them
identify problems under certain conditions. Future research might explore factors that embolden auditors to propose
adjustments to estimates, and factors that help auditors persuade clients to book these adjustments.
Design choices in this study also suggest avenues for future research. First, auditing standards require auditors to identify
significant risks, and complex estimates often fall into this category due to their high level of estimation uncertainty (PCAOB
2010a; Cannon and Bedard 2017). A significant risk classification should typically increase auditors’ epistemic motivation
when evaluating a complex estimate. Yet, the auditors in my study benefitted from other risk indicators despite the frequency
with which estimates are considered significant risks in practice. I did not remind auditors of significant risks when they began
the experiment; future research can examine whether explicitly reminding auditors of the significant risk inherent to many
complex estimates motivates them to more effectively consider the available evidence. Second, the audit team’s specialist
provides the relational cue. Future research can examine whether the source of a relational cue (e.g., a superior or subordinate
team member, the client, or an objective external source, such as industry reports) influences the way auditors use it. Auditors’
attitudes about specialists’ work, which I do not measure, might also be affected by situational factors. These attitudes likely
have different effects on auditors’ use of cues from specialists versus from other sources. Finally, this study only examines
senior auditors’ preliminary conclusions. Participants are not asked to propose adjustments, nor do they have the information to
do so. Future research can explore how the factors examined in this study affect auditors’ determination of adjustment amounts,
as well as how they interact with manager or partner review to influence final audit judgments. Nonetheless, this study provides
initial evidence about relational cues, a potentially useful tool to improve audits of estimates, and how they interact with risk
indicators, an important situational feature in auditing estimates, that future research can build on to ultimately improve audits
of estimates.

REFERENCES
Anderson, U., K. Kadous, and L. Koonce. 2004. The role of incentives to manage earnings and quantification in auditors' evaluations of
management-provided information. Auditing: A Journal of Practice & Theory 23 (1): 11–27. https://doi.org/10.2308/aud.2004.23.1.11
Bamber, E. M. 1983. Expert judgment in the audit team: A source reliability approach. Journal of Accounting Research 21 (2): 396–412.
https://doi.org/10.2307/2490781

45 See Nolder and Kadous (2017), Tegeler (2017), and Ikuta-Mendoza, Majors, and Winn (2016) for examples of features likely to influence mindsets.


Barr-Pulliam, D., J. R. Joe, S. A. Mason, and K. Sanderson. 2017. The Effects of Competition and Lack of a Professional Identity on the
Market for High Quality Valuation Service Providers. Working paper, University of Wisconsin–Madison, University of Delaware,
DePaul University, and Bentley University.
Bedard, J. C., and S. F. Biggs. 1991a. The effect of domain-specific experience on evaluation of management representations in analytical
procedures. Auditing: A Journal of Practice & Theory 10 (Supplement): 77–90.
Bedard, J. C., and S. F. Biggs. 1991b. Pattern recognition, hypotheses generation, and auditor performance in an analytical task. The
Accounting Review 66 (3): 622–642.
Berthon, P., M. Ewing, and L. L. Hah. 2005. Captivating company: Dimensions of attractiveness in employer branding. International
Journal of Advertising 24 (2): 151–172. https://doi.org/10.1080/02650487.2005.11072912
Bierstaker, J. L., J. C. Bedard, and S. F. Biggs. 1999. The role of problem representation shifts in auditor decision processes in analytical
procedures. Auditing: A Journal of Practice & Theory 18 (1): 18–36. https://doi.org/10.2308/aud.1999.18.1.18
Birnbaum, M. H., and S. E. Stegner. 1979. Source credibility in social judgment: Bias, expertise, and the judge’s point of view. Journal of
Personality and Social Psychology 37 (1): 48–74. https://doi.org/10.1037/0022-3514.37.1.48
Bohner, G., and N. Dickel. 2011. Attitudes and attitude change. Annual Review of Psychology 62 (1): 391–417. https://doi.org/10.1146/
annurev.psych.121208.131609
Boritz, J. E., N. Kochetova-Kozloski, L. A. Robinson, and C. Wong. 2017. Auditors’ and Specialists’ Views about the Use of Specialists
During an Audit. Working paper, University of Waterloo and Saint Mary’s University.
Brewster, B. E. 2011. How a systems perspective improves knowledge acquisition and performance in analytical procedures. The
Accounting Review 86 (3): 915–943. https://doi.org/10.2308/accr.00000040
Brown, C. E., and I. Solomon. 1990. Auditor configural information processing in control risk assessment. Auditing: A Journal of
Practice & Theory 9 (3): 17–38.
Brown, C. E., and I. Solomon. 1991. Configural information processing in auditing: The role of domain-specific knowledge. The
Accounting Review 66 (1): 100–119.
Buckless, F. A., and S. P. Ravenscroft. 1990. Contrast coding: A refinement of ANOVA in behavioral analysis. The Accounting Review
65 (4): 933–945.
Byrne, B. M. 2010. Structural Equation Modeling with AMOS: Basic Concepts, Applications, and Programming. 2nd edition. New York,
NY: Taylor & Francis Group.
Cannon, N. H., and J. C. Bedard. 2017. Auditing challenging fair value measurements: Evidence from the field. The Accounting Review
92 (4): 81–114.
Chaiken, S. 1980. Heuristic versus systematic information processing and the use of source versus message cues in persuasion. Journal of
Personality and Social Psychology 39 (5): 752–766. https://doi.org/10.1037/0022-3514.39.5.752
Chaiken, S., and D. Maheswaran. 1994. Heuristic processing can bias systematic processing: Effects of source credibility, argument
ambiguity, and task importance on attitude judgment. Journal of Personality and Social Psychology 66 (3): 460–473. https://doi.
org/10.1037/0022-3514.66.3.460
Chi, M. T. H., R. Glaser, and E. Rees. 1982. Expertise in problem solving. In Advances in the Psychology of Human Intelligence. Volume
1, edited by Sternberg, R. J., 7–75. Hillsdale, NJ: Erlbaum.
Christ, M. Y. 1993. Evidence on the nature of audit planning problem representations: An examination of auditor free recalls. The
Accounting Review 68 (2): 304–322.
Colbert, J. L., M. S. Luehlfing, and C. W. Alderman. 1996. Engagement risk. CPA Journal 66 (3). Available at: http://archives.cpajournal.
com/1996/mar96/depts/auditing.htm
Crano, W. D., and R. Prislin. 2006. Attitudes and persuasion. Annual Review of Psychology 57 (1): 345–374. https://doi.org/10.1146/
annurev.psych.57.102904.190034
Di Sessa, A. A. 2014. Phenomenology and the evolution of intuition. In Mental Models, edited by Gentner, D., and A. L. Stevens, 15–34.
New York, NY: Psychology Press.
Glover, S. M., M. H. Taylor, and Y. Wu. 2017a. Current practices and challenges in auditing fair value measurements and complex
estimates: Implications for auditing standards and the academy. Auditing: A Journal of Practice & Theory 36 (1): 63–84. https://
doi.org/10.2308/ajpt-51514
Glover, S. M., M. H. Taylor, and Y. Wu. 2017b. Mind the Gap: Why Do Experts Have Differences of Opinion Regarding the Sufficiency
of Audit Evidence Supporting Complex Fair Value Measurements? Working paper, Brigham Young University, Case Western
Reserve University, and Texas Tech University.
Goodwin, J. 1999. The effects of source integrity and consistency of evidence on auditors’ judgments. Auditing: A Journal of Practice &
Theory 18 (2): 1–16. https://doi.org/10.2308/aud.1999.18.2.1
Goodwin, J., and K. T. Trotman. 1996. Factors affecting the audit of revalued non-current assets: Initial public offerings and source
reliability. Accounting and Finance 36 (2): 151–170. https://doi.org/10.1111/j.1467-629X.1996.tb00304.x
Greeno, J. G. 1989. Situations, mental models, and generative knowledge. In Complex Information Processing: The Impact of Herbert A.
Simon, edited by Klahr, D., and K. Kotovsky, 285–318. Hillsdale, NJ: Lawrence Erlbaum Associates.
Griffith, E. E. 2016. How Do Auditors Use Valuation Specialists When Auditing Fair Values? Working paper, University of Wisconsin–
Madison.

Griffith, E. E., J. S. Hammersley, and K. Kadous. 2015a. Audits of complex estimates as verification of management numbers: How
institutional pressures shape practice. Contemporary Accounting Research 32 (3): 833–863. https://doi.org/10.1111/1911-3846.
12104
Griffith, E. E., J. S. Hammersley, K. Kadous, and D. Young. 2015b. Auditor mindsets and audits of complex estimates. Journal of
Accounting Research 53 (1): 49–77. https://doi.org/10.1111/1475-679X.12066
Guggenmos, R. D., M. D. Piercey, and C. P. Agoglia. 2017. Making Sense of Custom Contrast Analysis: Seven Takeaways and a New
Approach. Working paper, Cornell University and University of Massachusetts Amherst.
Hackenbrack, K., and M. W. Nelson. 1996. Auditors’ incentives and their application of financial accounting standards. The Accounting
Review 71 (1): 43–59.
Hammersley, J. S. 2006. Pattern identification and industry-specialist auditors. The Accounting Review 81 (2): 309–336. https://doi.org/
10.2308/accr.2006.81.2.309
Hammersley, J. S. 2011. A review and model of auditor judgments in fraud-related planning tasks. Auditing: A Journal of Practice &
Theory 30 (4): 101–128. https://doi.org/10.2308/ajpt-10145
Hammersley, J. S., K. M. Johnstone, and K. Kadous. 2011. How do audit seniors respond to heightened fraud risk? Auditing: A Journal of
Practice & Theory 30 (3): 81–101. https://doi.org/10.2308/ajpt-10110
Hayes, A. F. 2013. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach. New York,
NY: The Guilford Press.
Hirst, D. E. 1994. Auditors’ sensitivity to source reliability. Journal of Accounting Research 32 (1): 113–126. https://doi.org/10.2307/
2491390
Hoffman, V. B., and M. F. Zimbelman. 2009. Do strategic reasoning and brainstorming help auditors change their standard audit
procedures in response to fraud risk? The Accounting Review 84 (3): 811–837. https://doi.org/10.2308/accr.2009.84.3.811
Hu, L., and P. M. Bentler. 1999. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new
alternatives. Structural Equation Modeling 6 (1): 1–55. https://doi.org/10.1080/10705519909540118
Ikuta-Mendoza, K., T. M. Majors, and A. Winn. 2016. Can the Nature of Auditing Standards Shape Auditor Mindset? Evidence from a
Setting in which a ‘‘Just Do It’’ Mindset Improves Auditor Performance. Working paper, University of Washington, University of
Southern California, and University of Illinois at Urbana–Champaign.
Jenkins, J. G., E. Negangard, and M. J. Oler. 2016. Contemporary Use of Forensic Professionals in the Audit Process: Evidence from the
Field. Working paper, Virginia Polytechnic Institute and State University.
Joe, J. R., S. D. Vandervelde, and Y. Wu. 2017a. Use of high quantification evidence in fair value audits: Do auditors stay in their comfort
zone? The Accounting Review 92 (5): 89–116. https://doi.org/10.2308/accr-51662
Joe, J. R., Y. Wu, and A. B. Zimmerman. 2017b. Overcoming Communication Challenges: Can Taking the Specialist’s Perspective
Improve Auditors’ Critical Evaluation and Integration of the Specialist’s Work? Working paper, University of Delaware, Texas
Tech University, and Northern Illinois University.
Joseph, W. B. 1982. The credibility of physically attractive communicators: A review. Journal of Advertising 11 (3): 15–24. https://doi.
org/10.1080/00913367.1982.10672807
Joyce, E. J., and G. C. Biddle. 1981. Are auditors’ judgments sufficiently regressive? Journal of Accounting Research 19 (2): 323–349.
https://doi.org/10.2307/2490868
Kadous, K., and Y. D. Zhou. 2016. Motivating Auditor Skepticism. Working paper, Emory University.
Kadous, K., J. Leiby, and M. E. Peecher. 2013. How do auditors weight informal contrary advice? The joint influence of advisor social
bond and advice justifiability. The Accounting Review 88 (6): 2061–2087. https://doi.org/10.2308/accr-50529
Kline, R. B. 2011. Principles and Practice of Structural Equation Modeling. 3rd edition. New York, NY: Guilford Press.
Kruglanski, A. W. 1990. Lay epistemic theory in social-cognitive psychology. Psychological Inquiry 1 (3): 181–197. https://doi.org/10.
1207/s15327965pli0103_1
Libby, R., and K. T. Trotman. 1993. The review process as a control for differential recall of evidence in auditor judgments. Accounting,
Organizations and Society 18 (6): 559–574. https://doi.org/10.1016/0361-3682(93)90003-O
Lievens, F., C. Decaesteker, P. Coetsier, and J. Geirnaert. 2001. Organizational attractiveness for prospective applicants: A person-
organisation fit perspective. Applied Psychology 50 (1): 30–51. https://doi.org/10.1111/1464-0597.00047
Lundholm, R. J. 1999. Reporting on the past: A new approach to improving accounting today. Accounting Horizons 13 (4): 315–322.
https://doi.org/10.2308/acch.1999.13.4.315
Martin, R. D., J. S. Rich, and T. J. Wilks. 2006. Auditing fair value measurements: A synthesis of relevant research. Accounting Horizons
20 (3): 287–303. https://doi.org/10.2308/acch.2006.20.3.287
Nelson, M. W. 2009. A model and literature review of professional skepticism in auditing. Auditing: A Journal of Practice & Theory 28
(2): 1–34. https://doi.org/10.2308/aud.2009.28.2.1
Nolder, C. E., and K. Kadous. 2017. Grounding Measurement of Professional Skepticism in Mindset and Attitude Theory: A Way
Forward. Working paper, Suffolk University and Emory University.
Ohanian, R. 1990. Construction and validation of a scale to measure celebrity endorsers’ perceived expertise, trustworthiness, and
attractiveness. Journal of Advertising 19 (3): 39–52. https://doi.org/10.1080/00913367.1990.10673191

Peecher, M. E. 1996. The influence of auditors’ justification processes on their decisions: A cognitive model and experimental evidence.
Journal of Accounting Research 34 (1): 125–140. https://doi.org/10.2307/2491335
Peecher, M. E., and I. Solomon. 2001. Theory and experimentation in studies of audit judgments and decisions: Avoiding common
research traps. International Journal of Auditing 5 (3): 193–203. https://doi.org/10.1111/1099-1123.00335
Peecher, M. E., R. Schwartz, and I. Solomon. 2007. It’s all about audit quality: Perspectives on strategic-systems auditing. Accounting,
Organizations and Society 32 (4/5): 463–485. https://doi.org/10.1016/j.aos.2006.09.001
Peterson, R. A. 1994. A meta-analysis of Cronbach’s coefficient alpha. Journal of Consumer Research 21 (2): 381–391. https://doi.org/
10.1086/209405
Petty, R. E., and J. T. Cacioppo. 1984. Source factors and the elaboration likelihood model of persuasion. Advances in Consumer
Research 11: 668–672.
Petty, R. E., and J. T. Cacioppo. 1986. The elaboration likelihood model of persuasion. Advances in Experimental Social Psychology 19:
123–205. https://doi.org/10.1016/S0065-2601(08)60214-2
Petty, R. E., D. T. Wegener, and L. R. Fabrigar. 1997. Attitudes and attitude change. Annual Review of Psychology 48 (1): 609–647.
https://doi.org/10.1146/annurev.psych.48.1.609
Pornpitakpan, C. 2004. The persuasiveness of source credibility: A critical review of five decades' evidence. Journal of Applied
Social Psychology 34 (2): 243–281.
Public Company Accounting Oversight Board (PCAOB). 2003. Using the Work of a Specialist. PCAOB Interim Auditing Standards AU
Section 336. Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2009. Auditing Fair Value Measurements and Using the Work of a Specialist.
Standing Advisory Group Meeting (October 14–15). Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2010a. Auditing Standards Related to the Auditor’s Assessment of and
Response to Risk. PCAOB Release No. 2010-004. Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2010b. Report on Observations of PCAOB Inspectors Related to Audit Risk
Areas Affected by the Economic Crisis. PCAOB Release No. 2010-006. Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2011. Assessing and Responding to Risk in the Current Economic Environment.
Staff Audit Practice Alert No. 9 (December 6). Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2015. The Auditor’s Use of the Work of Specialists. Staff Consultation Paper
No. 2015-01 (May 28). Washington, DC: PCAOB.
Rich, J. S. 2004. Reviewers’ responses to expectations about the client and the preparer. The Accounting Review 79 (2): 497–517. https://
doi.org/10.2308/accr.2004.79.2.497
Rigdon, E. E., R. E. Schumacker, and W. Wothke. 1998. A comparative review of interaction and nonlinear modeling. In Interaction and
Nonlinear Effects in Structural Equation Modeling, edited by Schumacker, R. E., and G. A. Marcoulides, 1–16. Mahwah, NJ:
Erlbaum Associates.
Rouse, W. B., and N. M. Morris. 1986. On looking into the black box: Prospects and limits in the search for mental models. Psychological
Bulletin 100 (3): 349–363. https://doi.org/10.1037/0033-2909.100.3.349
Schultz, J. J., Jr., J. L. Bierstaker, and E. O’Donnell. 2010. Integrating business risk into auditor judgment about the risk of material
misstatement: The influence of a strategic-systems-audit approach. Accounting, Organizations and Society 35 (2): 238–251. https://
doi.org/10.1016/j.aos.2009.07.006
Simon, C. A. 2012. Individual auditors’ identification of relevant fraud schemes. Auditing: A Journal of Practice & Theory 31 (1): 1–16.
https://doi.org/10.2308/ajpt-10169
Slovic, P., and E. Peters. 2006. Risk perception and affect. Current Directions in Psychological Science 15 (6): 322–325. https://doi.org/
10.1111/j.1467-8721.2006.00461.x
Smith-Lacroix, J., S. Durocher, and Y. Gendron. 2012. The erosion of jurisdiction: Auditing in a market value accounting regime. Critical
Perspectives on Accounting 23 (1): 36–53. https://doi.org/10.1016/j.cpa.2011.09.002
Tegeler, A. 2017. The Influence of Inspection Focus on Auditor Judgments in Audits of Complex Estimates. Working paper, University of
Wisconsin–Milwaukee.
Wilks, T. J. 2002. Predecisional distortion of evidence as a consequence of real-time audit review. The Accounting Review 77 (1): 51–71.
https://doi.org/10.2308/accr.2002.77.1.51
Wolfe, C. J., B. E. Christensen, and S. D. Vandervelde. 2017. Can an Auditor’s Intuition Make Them More Skeptical When Performing
Impairment Testing? Working paper, Texas A&M University, University of Missouri, and University of South Carolina.
Zaichkowsky, J. L. 1985. Measuring the involvement construct. Journal of Consumer Research 12 (3): 341–352. https://doi.org/10.1086/
208520
Zhang, J. 1997. The nature of external representations in problem solving. Cognitive Science 21 (2): 179–217. https://doi.org/10.1207/
s15516709cog2102_3

APPENDIX A
Relational Cue Manipulation Used in Experiments 1 and 2
The abbreviated version of the specialist’s memo shown below illustrates the relational cue manipulation. The
unabbreviated memo is two pages long. In the relational cue condition, the box labeled ‘‘Additional Observations Made by
Specialist’’ appeared exactly as shown below on the memo’s second page. In the no relational cue condition, this box was
omitted and the rest of the memo was identical to the memo in the relational cue condition.

APPENDIX B
Engagement Risk Manipulations Used to Manipulate Risk Indicator in Experiment 2

Lower Engagement Risk Condition


Assume you are the in-charge senior on the audit of an electronics manufacturer, [client name]. [Client name] is privately
held. Your firm has audited [client name] since its inception 15 years ago.
[Client name] is narrowly in compliance with restrictive debt covenants for which waivers have been obtained in the past.
Management expects it will not be necessary to obtain a waiver of the debt covenants in the current year.

Higher Engagement Risk Condition


Assume you are the in-charge senior on the audit of an electronics manufacturer, [client name]. [Client name] is privately
held. Your firm is auditing [client name] for the first time this year. The client is contemplating going public this year and its
investment banker is ready to go to market with the IPO.
[Client name] is narrowly in compliance with restrictive debt covenants for which waivers have been obtained in the past.
However, the client’s lender is threatening not to grant a waiver of the debt covenants in the current year.
