
The Effect of Fraud Assessment Documentation Structure on Auditors' Ability to Identify Control Weaknesses: The Moderating Role of Reviewer Experience

Christopher P. Agoglia Department of Accounting Bennett S. LeBow College of Business Drexel University

Cathy Beaudoin Department of Accounting Bennett S. LeBow College of Business Drexel University

George T. Tsakumis Department of Accounting Bennett S. LeBow College of Business Drexel University 3141 Chestnut Street Philadelphia, PA 19104-2875 Telephone: (215) 895-2118 Fax: (215) 895-6279 E-mail: gtt22@drexel.edu

We would like to thank Todd DeZoort, Rick Hatfield, Carmelita Troy, and participants at the 2006 American Accounting Association Annual Meeting for their helpful comments.

The Effect of Fraud Assessment Documentation Structure on Auditors' Ability to Identify Control Weaknesses: The Moderating Role of Reviewer Experience

ABSTRACT: The current regulatory environment, brought on by recent high-profile audit failures, expands the auditor's role in detecting fraud. For example, auditors must now provide an opinion on clients' internal controls, addressing their effectiveness at preventing or detecting fraud. While the structure of workpaper documentation has been shown to affect audit workpaper preparers' assessments of overall fraud risk, prior research has not addressed the role their reviewers' experience plays in mitigating documentation structure effects. Our study matches audit workpaper preparers with reviewers to investigate whether reviewer task-specific experience moderates the effect of fraud assessment documentation structure on the audit review team's ability to identify the presence of significant control weaknesses. Consistent with expectations, we find that preparers who are required to document components of their fraud assessments inappropriately provided more favorable (and lower quality) assessments of significant control weaknesses than those using either a supporting or balanced documentation structure. More importantly, results indicate that reviewer task-specific experience moderated the effect of documentation structure on reviewers' identification of control weaknesses such that experienced reviewers compensated more for the effect of component documentation than reviewers with less experience. These results suggest that reviewer task-specific experience may help reduce the previously observed flow-through effect of preparer workpaper deficiencies on reviewer judgments and provide support for new regulations emphasizing the role of experience during the control assessment process.

Keywords: review process; control environment; fraud assessment; control weakness; audit documentation; task-specific experience

Data Availability: Data are available upon request.

INTRODUCTION

This study examines the role of reviewer task-specific experience in moderating the effect of fraud assessment documentation structure on the audit review team's ability to identify significant control weaknesses.1 Recent high-profile audit failures have prompted Congress and standard-setting bodies to pass new regulations that emphasize and expand the auditor's role in detecting and preventing fraud (e.g., the Sarbanes-Oxley Act of 2002 and Statement on Auditing Standards No. 99). Section 404 of the Sarbanes-Oxley Act now requires management of public companies to assess the effectiveness of their internal controls, and to include this assessment with their annual SEC filings. Auditors must also conduct their own independent assessment of, and issue an opinion on, the effectiveness of their clients' internal controls. This increased workload has pushed public accounting firms to increase recruiting on college campuses (Arndt 2004; Gomez 2004; The Daily News of Los Angeles 2004). The resulting increase in the ratio of staff auditors to managers and partners puts additional focus on how firms can best utilize their experienced reviewers at a time when the Public Company Accounting Oversight Board (PCAOB) has emphasized the importance of auditor experience for assessing internal controls (PCAOB 2004a). As prior research suggests that preparer workpaper deficiencies can flow through the review process and negatively impact reviewer judgments (Ricchiute 1999; Agoglia et al. 2003; Tan and Trotman 2003), it is important to consider factors such as experience that may improve reviewer performance when workpaper deficiencies exist. Research on auditor experience indicates that task-specific experience often improves auditors' judgments.
1 Similar to Trotman (1985), we define a review team as consisting of a hierarchical pair of auditors: a subordinate auditor who prepares the workpapers (i.e., the preparer) and a supervising auditor who reviews this work (i.e., the reviewer), with the review team's efforts culminating in the judgments/decisions of the reviewing auditor.

Specifically, task-specific experience obtained through task performance and review of others' performance in an area can lead to expert decision-making (Bonner 1990). Task-specific experience provides the opportunity for the development of enhanced knowledge structures, which can improve auditors' decision-making effectiveness (Biggs et al. 1987; Shelton 1999). Therefore, a reviewer's experience reviewing control environment assessments (i.e., task-specific experience) may influence his/her effectiveness in identifying control weaknesses that present opportunities for fraud. This is important because a key function of the review process is to help ensure the appropriateness of conclusions drawn by less experienced auditors (Shelton 1999). Thus, reviewer experience may be most critical in situations where the reviewer has to overcome shortcomings of workpaper documentation prepared by less experienced auditors (Libby and Trotman 1993). When auditors make assessments, they are typically required to document their conclusions in the audit workpapers. In practice, the format of such documentation may vary. Since this documentation may serve as a source of evidence in the event of litigation, it is important to consider the different ways in which auditors may structure such documentation (Koonce et al. 1995). Agoglia et al. (2003) show that varying the format of a justification memo affects the overall control environment assessments and evidence documented by audit workpaper preparers as well as the judgments of their reviewers. Our study extends prior research by examining whether reviewer task-specific experience moderates the effect of fraud assessment documentation structure on the review team's ability to identify specific control weaknesses that present opportunities for fraud (as is now required by PCAOB Auditing Standard No. 5).

We presented auditors (preparers) with a case containing control environment evidence for a hypothetical client that was based on the control environment of an actual firm that had experienced fraud. Preparers were asked to compile documentation regarding the client's control environment and to identify control environment weaknesses with regard to fraud. Specifically, preparers were required to document the control environment's ability to prevent fraudulent activities using evidence to support their assessments (supporting documentation), using both important positive and important negative evidence about the control environment (balanced documentation), or using important positive and negative evidence regarding components of the control environment (component documentation). Preparers were then presented with ten fraud risk factors identified in SAS No. 99 (AICPA 2002) and asked to assess how likely each was to be a problem area for the client.2 A reviewer was paired with each preparer and asked to review the fraud assessment documentation. Reviewers then provided their own assessment of how likely each of the ten fraud risk factors was to be a problem with regard to the client's control environment. To investigate the moderating influence of reviewer task-specific experience, we must first establish an effect of documentation structure on preparers' workpapers. Our results indicate preparers using the component documentation structure are less effective at identifying control weaknesses (i.e., they inappropriately assessed control weaknesses more favorably) than those using either the supporting or balanced documentation structures. More importantly, the results indicate that reviewer task-specific experience plays a significant role in mitigating the effect of fraud assessment documentation structure on auditor fraud risk judgments. Specifically, reviewer experience moderates the effect of component documentation on the identification of control weaknesses. Relative to reviewers with less task-specific fraud assessment experience, reviewers with greater experience appear to be better able to overcome judgment difficulties encountered by their preparers.
2 Fraud risk factors represent potential weaknesses in controls with respect to the ability to prevent/detect fraud.

The findings of our study contribute to the literature in several important ways. For example, our results suggest that reviewer task-specific experience may help reduce the previously observed flow-through effect of preparer workpaper deficiencies on reviewer judgments (e.g., Ricchiute 1999; Agoglia et al. 2003; Tan and Trotman 2003). Thus, our results help validate the emphasis that recent pronouncements place on experience during the fraud risk and control assessment process (e.g., PCAOB 2004a; SAS No. 99). Further, while a number of prior reviewer experience studies have examined differences between reviewers of varying ranks (e.g., manager versus senior reviewer performance), few have considered (or found) differences in reviewer reactions to experimental manipulations within experience levels (e.g., Messier and Tubbs 1994). We find that experienced component reviewers react differently to their preparers' assessments/documentation than their counterparts in the supporting or balanced documentation conditions. The remainder of this paper is organized as follows. The next section discusses the theoretical background and hypotheses. This is followed by a description of the research method and a presentation of the results. The final section offers conclusions and implications.

THEORY AND HYPOTHESES

The Current Professional Environment

The accounting profession is undergoing significant changes as a result of a number of high-profile corporate scandals, including Enron, WorldCom, Tyco, and Adelphia Communications. Both Congress and the Auditing Standards Board (ASB) have acted to impose greater responsibilities on auditors with respect to fraud and internal controls. Congress acted with its passage of the Sarbanes-Oxley Act of 2002, and the ASB implemented SAS No. 99, Consideration of Fraud in a Financial Statement Audit (AICPA 2002).

Under SAS No. 99, auditors are required to gather and consider more information to assess fraud risk than they have in the past, as well as explicitly document their assessment in the workpapers. Among other responsibilities, SAS No. 99 requires that, when obtaining information about the client and its environment, the auditor should consider the presence of fraud risk factors. SAS No. 99 (para. 31) defines fraud risk factors as events or conditions that indicate incentives/pressures to perpetrate fraud, opportunities to carry out the fraud, or attitudes/rationalizations to justify fraudulent actions.3 Although fraud risk factors do not necessarily indicate that fraud is present, they often are present when fraud does exist and are, therefore, important elements to consider within the scope of an audit engagement. As part of compliance with Section 404 of the Sarbanes-Oxley Act, managers must evaluate the effectiveness of their internal control procedures and auditors must evaluate the accuracy of their clients' assertions.4 PCAOB Auditing Standard (AS) No. 5 addresses the importance of controls over possible fraud. As a result of this standard, auditors conduct their own independent assessment of (and issue an opinion on) internal controls with respect to their effectiveness at preventing or detecting fraud that may result in material misstatement of the financial statements. Thus, while it has always been important to the performance of the audit, the auditor's responsibility for identifying and documenting control weaknesses has increased substantially in the new regulatory environment. Further, PCAOB AS No. 3 addresses audit documentation.
3 SAS No. 99 provides examples of fraud risk factors related to fraudulent financial reporting (e.g., revenue recognition policies and management estimate issues) and misappropriation of assets (e.g., inadequate controls over cash and inventory items).
4 Internal control assessment involves ensuring that steps are in place to prevent or detect the theft or unauthorized use of the company's assets to the extent that such prohibited acts could result in a material effect on the financial statements. The control environment, a component of internal control, sets the tone of the organization, influencing the control consciousness of its people, and is the foundation upon which all other internal controls rest (AICPA 1995, para. 25).

AS No. 3 states that auditors who prepare (e.g., preparers) audit documentation should provide sufficient information to enable an experienced auditor (e.g., reviewer) to understand the procedures performed, evidence obtained, and conclusions reached, including relevant information inconsistent with those conclusions (PCAOB 2004b). This recent guidance highlights the importance of audit documentation quality and its significance for those who review the documentation.

Alternative Documentation Structures

Auditors are typically required to document their conclusions in the workpapers, which will later be scrutinized by those supervising their work. Audit workpapers contain documentation relating to various aspects of the audit, such as planning, internal control evaluations, and audit procedures performed. The form this documentation takes, however, can vary in practice. Given that this documentation typically provides the rationale for the auditor's opinion and often serves as a key source of legal evidence in the event of litigation (Koonce et al. 1995), it is important to consider the potential effect of the documentation's structure on audit judgment. Related research has focused on comparing the differences in judgments between auditors required, or not required, to justify their decisions (e.g., Johnson and Kaplan 1991; Koonce et al. 1995; Peecher 1996; Hoffman and Patton 1997). Although this research indicates that justification can affect audit judgments, it does not examine the effects of how that documented justification is structured. A recent study examines the effects of alternatively structured justification memos on audit judgments. Agoglia et al. (2003) find that the format of these justifications (i.e., how the workpapers require them to be structured) can affect the overall fraud risk assessments of auditors preparing this documentation as well as those of auditors reviewing their work. Following Agoglia et al. (2003), we investigate three different structures in which preparers can document their fraud assessments:

(1) supporting documentation, which requires preparers to provide evidence supporting their conclusions; (2) balanced documentation, which requires preparers to document important positive and negative evidence (e.g., both strengths and weaknesses of a client's control environment); and (3) component documentation, which requires preparers to document important positive and negative information for components of their task (e.g., strengths and weaknesses of components of the control environment).5 The first two represent more holistic approaches to documentation, while the third can be thought of as a decomposition of the workpapers into smaller parts. The literature on judgment decomposition reveals some relative advantages and disadvantages of breaking judgments down into components, compared to holistic judgments (e.g., Einhorn 1972; Gettys et al. 1973; Slovic et al. 1977; Jiambalvo and Waller 1984; Jako and Murphy 1990). For example, attending to a single component at a time may allow the individual to consider more information regarding the particular component than when making a more holistic judgment (Armstrong et al. 1975; Ravinder et al. 1988). Somewhat ironically, however, this increase in total information considered could also result in a shift in focus away from critical items toward this additional, and often less relevant, evidence. Cornelius and Lyness (1980) demonstrate that judgment decomposition can force individuals to attend to inappropriate/irrelevant information and to use more information than is necessary for the overall judgment, resulting in less effective performance. Further, it appears that individuals are not particularly good at combining portions of a judgment into an integrated whole. Prior research suggests that requiring individuals to integrate their component judgments into overall judgments can diminish, or even reverse, the benefits of decomposition
5 While not intended to represent an exhaustive list of possible documentation structures, these three documentation structures were chosen to be consistent with prior research and because they represent structures similar to those that have been used in practice (Agoglia et al. 2003). Further, discussions with practicing auditors and an examination of a sample of workpapers from international public accounting firms suggest that these structures are similar to those auditors are familiar with in practice.

(e.g., Cornelius and Lyness 1980; Lyness and Cornelius 1982; Jiambalvo and Waller 1984). Thus, component documentation, which similarly requires preparer integration of smaller parts into a greater whole, may not compare favorably to more holistic approaches like balanced and supporting documentation. The results of Agoglia et al. (2003) support this notion. They find that component justification memos result in the greatest amount of evidence documented and the lowest (i.e., most favorable) overall fraud risk levels assessed by preparers, relative to balanced and supporting memos. They attribute this result to the fact that auditors using component memos documented more total evidence items than auditors in the other memo conditions. If a large amount of evidence is documented, the relative weight given to each evidence item is likely to decrease (Pincus 1989), which may affect an auditor's judgments when the proportion of positive and negative evidence is imbalanced. For example, if a client's control environment has only a small number of weaknesses, the relative weight given to these weaknesses is likely to decrease as the overall set of evidence considered increases (e.g., Hackenbrack 1992; Glover 1997; Hoffman and Patton 1997; Shelton 1999). Given that even troubled clients typically have a greater proportion of positive control environment characteristics than negative characteristics (Agoglia et al. 2003), the increased documentation requirements of our component documentation structure will tend to result in a greater focus on positive evidence. In turn, component documentation preparers may be less likely to identify significant control weaknesses as areas of concern than supporting or balanced preparers. Thus, we expect that documentation structure will affect preparers' ability to identify specific control weaknesses in much the same way as it has been found to affect overall assessments of the control environment.

It is necessary to first establish that this effect exists in order to investigate the role reviewer experience plays in moderating it, which is our primary focus in this study. Therefore, we test the following hypothesis to establish the requisite effect:

H1: Component documentation preparers will assess control environment weaknesses more favorably than preparers using supporting and balanced documentation.

Reviewer Task-Specific Experience

When auditors make assessments, they are typically required to document their conclusions in the audit workpapers, which are subject to review by a supervising auditor (Emby and Gibbins 1988; Brazel et al. 2004). One of the main functions of the review process is to ensure workpaper quality (i.e., the adequacy of procedures performed and appropriateness of conclusions drawn) (Libby and Trotman 1993; AICPA 2006b). However, this quality control function becomes a more challenging task for reviewers when there are shortcomings in their preparers' workpapers which they must overcome. Prior research suggests that biases and deficiencies in the workpapers can flow through the review process and negatively impact reviewer judgments (Ricchiute 1999; Agoglia et al. 2003; Tan and Trotman 2003; Agoglia et al. 2007). For example, Ricchiute (1999) finds that receiving different (biased) subsets of a larger evidence set can bias partners' going concern decisions, and Agoglia et al. (2003) demonstrate that altering the format of a justification memo can affect preparers' judgments and, in turn, that this effect persists through the review process to influence reviewer judgments. These prior studies, however, do not investigate potential factors that may mitigate workpaper deficiencies during the review (i.e., factors that reduce the observed flow-through effect). Reviewer experience is an important variable to consider since a primary function of the hierarchical review process is to reduce the likelihood of the audit being compromised by the judgments of less-experienced auditors (Solomon 1987; Shelton 1999; Brazel et al. 2004).

Research on auditor experience indicates that auditors' judgments often improve with greater experience. Bonner and Lewis (1990) show that auditors with more experience generally perform more effectively than auditors with less experience. Experience provides an opportunity for the acquisition of relevant technical knowledge, which is essential for improving task performance (Libby 1995). As a result, auditors with more experience-based knowledge usually make better decisions than auditors with less (Libby and Luft 1993). For example, Knapp and Knapp (2001) show that, with greater levels of experience, auditors become more effective at assessing the risk of financial statement fraud. Prior research also suggests that task-specific experience improves auditors' judgments. Task-specific experience, obtained through exposure to an area, can lead to expert decision making (Bonner 1990). As a result of their well-developed knowledge structures, expert auditors tend to use directed strategies to acquire information pertinent to a specific decision or task, resulting in more effective decision making (Biggs et al. 1987; Shelton 1999).6 The judgment and decision-making literature suggests that experienced individuals are better able to integrate large or complex data sets for evaluative tasks due to their more fully developed knowledge structures (e.g., Larkin et al. 1980; Patel et al. 1986; Hassebrock et al. 1993; Wiggins and O'Hare 1995; Sohn and Doane 2004).

6 However, while experience has been shown to improve auditor performance in general, there are some exceptions to these findings (e.g., Monroe and Ng 2000; Chung and Monroe 2000). For example, Choo and Trotman (1991) find that, while their performance is superior on certain recall measures, experienced auditors are more likely than inexperienced auditors to incorrectly recall atypical items that were not part of the original evidence set. Studies specifically examining the role of reviewer experience in audit workpaper review have produced somewhat mixed results as well. For example, Ramsay (1994) and Harding and Trotman (1999) find that, compared to lower ranking reviewers, higher ranking reviewers (e.g., partners versus managers) are better at detecting conceptual errors but not as effective at detecting mechanical errors, and Ballou (2001) finds no benefit of reviewer experience in his setting. Further, prior reviewer experience studies have typically not found differences in reviewer reactions to experimental manipulations within experience levels (as we expect in our study). For example, Messier and Tubbs (1994) find no significant differences in judgment for experienced reviewers (managers) in review and no-review conditions.


For example, Wiggins and O'Hare (1995) show that individuals experienced at an evaluative task are more likely to go beyond a surface representation of the problem/task and to use more structured and efficient information search strategies than their less experienced counterparts. Patel et al. (1986) and Hassebrock et al. (1993) find that, through more fully developed knowledge structures, experienced individuals tend to focus more on information relevant to a decision than those with less experience. Thus, the knowledge structures developed through task-specific experience (e.g., experience performing and reviewing evaluations of the effectiveness of a client's control environment) should help reviewers to focus their reviews on more relevant evidence items (Biggs et al. 1987; Shelton 1999), allowing them to better identify the true nature of specific fraud risk factors. Reviewers with greater task-specific experience are likely to be less influenced by their preparers' conclusions/documentation (i.e., better equipped to formulate an independent evaluation of the evidence), particularly in situations where the preparer's assessment does not appropriately reflect conditions at the client. This notion is consistent with Brazel and Agoglia (2007), who view the auditor risk assessment process as a belief revision task, with a prior assessment serving as a starting point, or anchor. This anchor is then revised, often insufficiently, to create a current assessment. In cases where preparer assessments and workpapers do not fully reflect conditions at the client, reviewers with lower task-specific experience may be less able to properly integrate evidence and appropriately assess the impact of specific fraud risk factors on the firm's control environment given their less-developed knowledge structures. Therefore, these reviewers may be more likely to anchor on their preparers' fraud risk factor assessments and, in turn, their assessments may deviate less from those of their preparers.


In contrast, more experienced reviewers' knowledge structures should enable them to better integrate evidence to identify and react to specific weaknesses affecting the firm's control environment, resulting in assessments that deviate farther from their preparers' assessments (when preparers' assessments do not appropriately reflect current conditions) than those of less experienced reviewers. Therefore, while prior findings suggest that inadequacies in preparer workpapers tend to flow through to reviewer judgments (Ricchiute 1999; Agoglia et al. 2003; Agoglia et al. 2007), more experienced reviewers' knowledge structures may allow them to more effectively overcome the challenges presented by documentation structure and better assess the impact of fraud risk factors on the firm's control environment than their less experienced counterparts. With respect to the three documentation structures investigated here, more experienced reviewers may be better equipped to overcome the potential oversights of preparers in the component documentation condition (i.e., relative to those with less experience, experienced component reviewers may be more likely to identify control weaknesses that their preparers may have overlooked). If the supporting and balanced conditions result in better preparer documentation and assessments, then there is less of a burden on the reviewer and reviewer experience becomes less of a factor. Thus, we expect that reviewer task-specific experience will moderate the effect of documentation structure on the difference between preparer and reviewer control weakness assessments. That is, given the requisite preparer documentation effect anticipated by H1, we expect preparer-reviewer assessment differences to be greatest when preparers document their assessments using component documentation and the reviewer is more experienced. The following hypothesis is, therefore, tested:

H2: As reviewer task-specific experience increases, differences between preparer and reviewer assessments of control environment weaknesses will be greater for component documentation audit teams than for supporting and balanced documentation audit teams.


METHOD

Participants

One hundred and eight practicing auditors from large international accounting firms participated in the study (54 as preparers and 54 as reviewers). Auditors participating as preparers were generally audit seniors with an average of 4.0 years of experience, while auditors participating as reviewers were generally audit managers with an average of 8.7 years of experience. Discussions with audit partners indicate that auditors with these titles and levels of experience are familiar with evaluating control environments and reviewing these evaluations, respectively. Mean years of audit experience were not significantly different between experimental conditions for either preparer or reviewer participants.7 Participants completed the experiment at their offices.

Experimental Case

The experimental materials relate to the control environment of a hypothetical client and are based on the audit of an actual company that experienced a misappropriation of assets (i.e., fraud). We choose this setting for our experiment given a number of recent high-profile audit failures in which weak aspects of the control environment were unable to prevent a misappropriation of assets (e.g., Tyco International, Adelphia Communications, and Patterson-UTI Energy (Glovin 2003; Howe 2002; Blaney 2005)). Our case is derived from materials developed and employed by Agoglia et al. (2003) and updated, where necessary, to reflect the current auditing environment. While much of the evidence presented in the case is similar, the dependent variables we gather from participants are different than in their study.

7 Demographic variables including familiarity with authoritative guidance, effort spent on the task, and pressure to perform the task were not significantly different between groups and did not have a significant effect on the overall findings (ps > .40).


In our study, we collect data relating to the identification of certain fraud risk factors, rather than more holistic assessments of fraud. We choose to investigate the identification of specific fraud risk factors in a control environment setting given the relevance of the control environment to an entity's ability to prevent, deter, mitigate, and/or detect fraud. SAS No. 106 (AICPA 2006a) states that the auditor must obtain a sufficient understanding of the entity and its environment, including the control environment, to assess the risk of material misstatement due to fraud. Further, SAS No. 78 (AICPA 1995) indicates that controls intended to address the risks of fraud should be considered when assessing internal controls, and that an effective control environment can help reduce the risk of fraud. SAS No. 99 (AICPA 2002) requires explicit assessment of the risk of material misstatement due to fraud (from both misappropriation of assets and fraudulent financial reporting) and identifies three conditions that are generally present when fraud exists: incentive/pressure for management to commit fraud, opportunity for fraud to be perpetrated, and a culture or environment that enables management to rationalize committing fraud. Evidence gathered to assess an entity's control environment can be useful in determining the presence/absence of these three conditions. In fact, SAS No. 99 explicitly recommends the consideration of a number of fraud risk factors specifically relating to the control environment when assessing the risk of fraud at the client. The evidence presented in our case is designed to provide information relevant to determining the existence of these three conditions for fraud, and the fraud risk factors that our participants consider directly relate to these conditions. We selected ten fraud risk factors for inclusion in our study. The ten factors come from SAS No. 99 and relate to the control environment.


Two individuals involved with the actual audit examined the case materials to identify whether each fraud risk factor was likely or unlikely to be a problem area with respect to the control environment's ability to prevent fraud. Based on the evidence provided in the case materials, four of the ten fraud risk factors were determined to be serious problem areas (i.e., significant weaknesses), while the remaining six were determined to be areas of strength.8 Fraud risk factor categorization as a weakness or strength was confirmed by three experts (audit partners) not involved with the actual audit engagement. These expert responses were used to determine the appropriateness of participants' fraud risk assessments. The case materials included background information on the client and detailed information regarding the client's control environment. The information was presented in the form of audit team member comments provided across the seven control environment dimensions incorporated in SAS No. 78 (AICPA 1995). The seven dimensions are: integrity and ethical values, commitment to competence, board of directors and audit committee, management's philosophy and operating style, organizational structure, the assignment of authority and responsibility, and human resource policies and practices. The evidence set presented to participants was extensive, containing 126 separate evidence items.

Preparer Task

Preparers were randomly assigned to a fraud assessment documentation structure condition, provided a case booklet, and required to prepare and document an assessment of the control environment's ability to prevent fraudulent activities. The instructions required preparers to structure their assessment documentation in one of three ways:

8 The four problem areas are: (1) management's attitude toward overriding controls, (2) the degree of oversight related to the company's control structure exercised by management, (3) controls related to the safeguarding of assets, and (4) segregation of duties, particularly for personnel in key functions.


using evidence that supports their assessment of the client's control environment (supporting documentation), using both positive and negative evidence about the control environment (balanced documentation), or using positive and negative evidence about components of the control environment (component documentation). We examined a sample of workpapers from international audit firms to ensure that participants would be familiar with these structures. Our examination suggests that: a) auditors are commonly required to document evidence for components of a larger judgment, b) auditors are often asked to document evidence in support of their judgments, and c) documentation of important inconsistent evidence may be requested (as noted in PCAOB AS No. 3). Discussions with auditors from international firms also suggest that they are regularly asked to provide documentation in various formats and that our structures are familiar to these auditors.9 After completing the documentation, preparers assessed the impact of the ten randomly ordered specific fraud risk factors (six control strengths and four control weaknesses, discussed above) on the control environment's ability to prevent fraud. Specifically, preparers were asked whether each factor was likely or unlikely to be a problem area. Responses were made on ten-point scales, with endpoints labeled "highly unlikely to be a problem area" (coded as 10) and "highly likely to be a problem area" (coded as 1). Preparer participants then responded to a series of demographic and case-related questions.

Reviewer Task

Reviewers received the same client background and control environment information as preparers. They were randomly matched with a preparer and reviewed that individual's fraud assessment documentation, which had been structured using one of the three documentation conditions.

9 Additionally, while we received informal feedback from many participants upon completion of the study, none mentioned any discomfort with the format of the documentation. We also analyze a measure of effort preparer and reviewer participants spent on their respective tasks to help provide assurances that our results are not driven by participants' familiarity with the assigned documentation structure (e.g., one might expect a participant to report spending more effort on a workpaper in which the documentation structure was unfamiliar). Analyses using this variable indicate that there are no differences between documentation structure groups on this variable and that including it as a control variable does not alter our conclusions. While these additional insights are not conclusive, they provide us with greater confidence that a novelty effect is not at play in our results.


After reviewing their preparer's control environment assessment documentation, reviewers were provided with a list of the ten specific fraud risk factors, along with their preparer's assessments of these factors. Reviewers were asked to assess whether each of the ten fraud risk factors was a potential problem area for the client on the same ten-point scales as those utilized by the preparers. Like the preparer participants, reviewers also responded to a series of demographic and case-related questions, including a measure of task-specific experience.

RESULTS

Hypothesis One

Hypothesis 1 predicts that component documentation preparers would assess control environment weaknesses as less problematic (with regard to fraud) for the client than preparers using supporting or balanced documentation. To test H1, we analyzed preparers' responses in a 1 x 3 ANOVA with documentation structure (supporting, balanced, or component) as the independent variable and participants' mean assessments of the four control environment weaknesses as the dependent variable. Participants indicated how likely each factor was to be a problem with regard to the control environment's ability to prevent fraud on a 10-point scale (where 1 = "highly likely to be a problem area" and 10 = "highly unlikely to be a problem area"). Thus, lower (higher) scores indicate that the participant perceived the control environment as less likely (more likely) to prevent fraudulent activities associated with that risk factor. Table 1 presents participants' assessments of the four control environment weaknesses.

[Insert Table 1]


Panel A of Table 1 shows that preparers' mean assessments (across the four control environment weaknesses) were 3.89, 3.94, and 5.61, respectively, for the supporting, balanced, and component documentation groups (F = 7.930, p = .001). Contrast tests presented in Panel B indicate that the mean assessment of the component group was significantly higher (i.e., component preparers assessed the four factors as lower risk) than the balanced and supporting groups (ps < .001). Similar results are observed for each of the four control weaknesses individually (see Table 1). Consistent with H1, these data suggest that component documentation preparers inappropriately viewed control weaknesses more favorably than preparers in the supporting and balanced groups. Specifically, preparers using component documentation indicated that there was a lower likelihood that these control environment weaknesses would be a problem with regard to the control environment's ability to prevent fraud. It appears that, consistent with the development of H1, the increased documentation requirements of the component documentation structure resulted in a greater focus on positive evidence, in turn leading to the more favorable assessments of control weaknesses observed for component group preparers. We find that preparers in the component group documented a significantly higher proportion of positive evidence items (62.8% of their total documented evidence), on average, than either the supporting or balanced groups (43.5% and 49.6%, p = .001 and p = .023, respectively).10 In addition, we examine the quality of the preparers' assessments. Similar to Tan (1995), assessment quality is measured as the absolute deviation of preparer assessments from expert assessments of the four control environment weaknesses, where more (less) deviation from expert assessments indicates lower (higher) preparer assessment quality.11
10 On average, preparers in the component group documented 51.9 total evidence items (33.4 positive items), while those in the supporting and balanced groups documented 21.0 (7.9 positive) and 27.9 (14.6 positive) items, respectively.
11 Expert assessments came from three experts (audit partners) not involved with the actual audit engagement upon which the experimental case is based. For each participant, absolute deviations are calculated for each of the four control weaknesses individually and then averaged across the four items to produce a mean absolute deviation from expert assessments (i.e., our measure of assessment quality).


Preparers' mean absolute deviations from the experts' assessments were 1.69, 1.64, and 2.89 for the supporting, balanced, and component groups, respectively. Contrast tests indicate that component preparers' mean absolute deviation from the experts was significantly higher than both the supporting (p = .003) and balanced (p = .002) groups. Thus, it appears that not only were component preparers' control weakness assessments more favorable than those of preparers in the supporting and balanced groups, they were also of lower quality. Recall that preparers were also asked to assess the impact of six control strengths on the control environment's ability to prevent fraud. Interestingly, and contrary to what we found for the four control weaknesses, documentation structure had no significant effect on preparers' assessments of the control strengths (means = 6.89, 7.43, and 7.28 for the supporting, balanced, and component groups, respectively; p = .492). These results indicate that the preparers in the supporting and balanced conditions were not simply more conservative across the board (i.e., all three groups were equally effective at recognizing control strengths), but that they were better able to selectively direct attention to areas of weakness (i.e., they were better able to identify and appropriately assess the weaknesses present at the hypothetical client) than those in the component group.
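For readers who want to trace the H1 analysis concretely, the sketch below shows one way the preparer-level dependent variable, the 1 x 3 ANOVA, a simplified component-versus-other contrast, and the expert-deviation quality measure could be computed. It is only an illustration under assumed inputs: the file name, the column names (structure and the four weakness items), and the expert benchmark values are hypothetical and are not taken from the study's data.

```python
import pandas as pd
from scipy import stats

# Hypothetical layout: one row per preparer, with the assigned documentation
# structure and the 10-point assessments of the four control weakness items.
prep = pd.read_csv("preparer_responses.csv")                        # assumed file name
weak_cols = ["weak1", "weak2", "weak3", "weak4"]                    # assumed column names
expert = {"weak1": 2.0, "weak2": 2.3, "weak3": 1.7, "weak4": 2.7}   # illustrative expert benchmarks

# Dependent variable for H1: each preparer's mean assessment across the four weaknesses
# (lower values = weakness judged more likely to be a problem area).
prep["mean_weak"] = prep[weak_cols].mean(axis=1)

# 1 x 3 ANOVA with documentation structure (supporting/balanced/component) as the factor.
groups = [g["mean_weak"].to_numpy() for _, g in prep.groupby("structure")]
f_stat, p_anova = stats.f_oneway(*groups)

# Simplified version of the planned contrast: component group versus the other two combined.
component = prep.loc[prep["structure"] == "component", "mean_weak"]
others = prep.loc[prep["structure"] != "component", "mean_weak"]
t_stat, p_contrast = stats.ttest_ind(component, others)

# Assessment quality: mean absolute deviation from the expert assessments,
# where larger deviations indicate lower-quality assessments.
prep["quality_dev"] = prep[weak_cols].sub(pd.Series(expert)).abs().mean(axis=1)

print(f"ANOVA: F = {f_stat:.3f}, p = {p_anova:.3f}")
print(f"Component vs. others: t = {t_stat:.3f}, p = {p_contrast:.3f}")
print(prep.groupby("structure")[["mean_weak", "quality_dev"]].mean())
```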

Hypothesis Two

Given the H1 results, which establish the prerequisite effect of documentation structure on preparers' assessments of control weaknesses, we now turn our attention to H2 and the role of reviewer task-specific experience in moderating the effect of documentation structure. To test H2, we ran the following multiple regression:

DIFFi = a0 + a1TSE + a2COM + a3TSE*COM + e                (1)


DIFF represents the difference between preparer and reviewer responses. Specifically, DIFF is calculated as the preparer's minus the reviewer's mean assessment for the four control weaknesses. TSE represents the reviewer's task-specific experience.12 Specifically, reviewers were asked how much experience they had in reviewing evaluations of the effectiveness of a client's control environment. Responses were made on a 9-point scale, with endpoints labeled "very extensive experience" (coded as 9) and "very limited experience" (coded as 1). Documentation structure was dummy coded. COM is coded as 1 for component documentation and 0 otherwise. Thus, the supporting and balanced groups serve as the baseline conditions since hypothesized differences relate to comparisons between them and the component group. TSE*COM represents the interaction between reviewer task-specific experience and documentation structure. Table 2 reports the results from estimating the multiple regression model specified in equation (1). Hypothesis 2 predicts that as reviewer task-specific experience (TSE) increases, differences in preparer and reviewer assessments of control weaknesses will be greater for the component documentation pairings than for the supporting and balanced audit teams. That is, experience has the greatest impact when preparers are less effective at identifying significant weaknesses in the client's control environment. Given this hypothesized effect of reviewer experience, we expect (and find) that component reviewers' mean assessments begin to converge toward those of the reviewers in the other conditions (means = 4.04, 3.92, and 4.60 for the supporting, balanced, and component groups, respectively; p = .396).
12 While it is generally preferable to manipulate independent variables to take better advantage of the benefits offered by experimental designs, one cannot easily manipulate factors such as forms of intelligence or expertise (Peecher and Solomon 2001), typically necessitating measurement of such variables. Ideally, given that our TSE variable was to be measured, we would have preferred to use an independently (or researcher) assessed measure of reviewer expertise at performing the specific task. However, since an observable measure of expertise in reviewing control environment evaluations would have been infeasible to obtain (Abdolmohammadi and Shanteau 1992), we used a self-reported measure of task-specific experience as a surrogate for actual participant expertise (similar to Bonner and Lewis's (1990) measures of control and ratio knowledge; see also DeZoort and Salterio 2001; O'Donnell 2002; Brazel and Agoglia 2007).


With respect to a direct test of H2, we expect a significant positive coefficient for a3. Table 2 shows that the coefficient for the interaction term (TSE*COM) is in the expected positive direction and statistically significant (p = .008), providing support for H2.13, 14 Interestingly, when a general measure of experience (years of audit experience) is substituted for TSE in the model, the interaction term is no longer significant (p = .146). Thus, it appears that experience specific to the task (reviewing control environment assessments), and not generic experience, is critical to reviewers being able to overcome the challenges presented by documentation structure.

[Insert Table 2]

Although not hypothesized, we find a similar interactive effect for reviewers' assessment quality. As with preparer assessment quality, reviewer assessment quality is measured by computing the mean of the absolute deviations of reviewers' assessments from experts' assessments across the four control environment weaknesses. Using an equation similar to equation (1) in which reviewer assessment quality is substituted for DIFF as the dependent variable, we find that task-specific experience (TSE) and documentation structure have an interactive effect on reviewer assessment quality (TSE*COM is significant at p = .017). That is, as task-specific experience (TSE) increases, component documentation reviewers' assessment quality is less affected by documentation structure. Thus, relative to less experienced component reviewers, it appears that not only were experienced reviewers' control weakness judgments less affected by (anchored on) their preparers' more favorable assessments, they were also of higher quality.
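As a purely illustrative sketch (not the authors' analysis code), equation (1) could be estimated along the following lines. The pairing file, its column names (prep_mean_weak, rev_mean_weak, structure, tse), and the use of statsmodels are assumptions made for the example.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical layout: one row per matched preparer-reviewer pair.
pairs = pd.read_csv("review_pairs.csv")                              # assumed file name

# DIFF: preparer's minus reviewer's mean assessment of the four control weaknesses.
pairs["diff"] = pairs["prep_mean_weak"] - pairs["rev_mean_weak"]     # assumed column names

# COM dummy: 1 = component documentation, 0 = supporting or balanced (the baseline).
pairs["com"] = (pairs["structure"] == "component").astype(int)

# Equation (1): DIFF = a0 + a1*TSE + a2*COM + a3*TSE*COM + e,
# where tse is the reviewer's self-reported task-specific experience (1-9 scale).
model = smf.ols("diff ~ tse + com + tse:com", data=pairs).fit()
print(model.summary())   # H2 predicts a positive, significant coefficient on tse:com
```

The same skeleton extends to the untabulated quality analysis by substituting the reviewer's mean absolute deviation from the expert assessments for diff as the dependent variable.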
13 Altering the model to allow us to compare the component group directly to each of the other groups produces significant interaction coefficients for the component-balanced comparison (p = .029) and the component-supporting comparison (p = .024). Thus, H2 holds not only for comparisons between the component group and the balanced and supporting groups combined, but also for individual comparisons (i.e., component versus balanced and component versus supporting).
14 We dichotomize the component group at the median level of reviewer task-specific experience as a further illustration of the effect of task-specific experience. Resulting mean control weakness assessments for the high and low experience reviewers are 3.79 and 5.03, respectively, suggesting that it is the experienced reviewers who are driving this effect. Further, experienced reviewers tended to deviate farther from their preparers' assessments (2.25 lower than their preparers, on average) than less experienced reviewers (0.31 lower).


DISCUSSION

The current regulatory environment (e.g., the Sarbanes-Oxley Act of 2002 and SAS No. 99), brought on by recent high-profile audit failures, emphasizes and expands the auditor's role in detecting and preventing fraud. Sarbanes-Oxley requires management of publicly traded companies to assess the effectiveness of their internal controls, and to include this assessment with their annual SEC filings. Auditors must also conduct their own independent assessment of, and issue an opinion on, the effectiveness of their clients' internal controls. This increased workload in the new regulatory environment has pushed public accounting firms to dramatically increase recruiting, resulting in an increase in the ratio of audit staff to audit managers and partners at a time when regulatory agencies are recognizing experience as playing a crucial role in the effective assessment of internal controls and fraud risk (AICPA 2002; PCAOB 2004a). As prior research suggests that preparer workpaper deficiencies can flow through the review process and negatively impact reviewer judgments (Ricchiute 1999; Agoglia et al. 2003; Tan and Trotman 2003; Agoglia et al. 2007), it is important to consider factors such as experience that may improve reviewer performance when workpaper deficiencies exist. Prior research provides some support for the importance of auditor experience, showing that experience obtained through specific task performance can lead to improved decision-making (e.g., Bonner 1990). And while the structure of workpaper documentation has been shown to affect auditors' assessments of overall fraud risk (Agoglia et al. 2003), prior research has not addressed the role their reviewers' experience plays in mitigating documentation structure effects. Our study matches audit workpaper preparers with reviewers to investigate whether reviewer task-specific experience moderates the effect of fraud assessment documentation structure on the audit review team's ability to identify significant control weaknesses.


Consistent with expectations, we find that preparers using component documentation inappropriately assess weaknesses in the control environment more favorably than those using either supporting or balanced documentation, resulting in lower quality assessments for the component preparers. More importantly, we find that reviewer task-specific experience moderates the effect of documentation structure on reviewers' identification of control weaknesses. Specifically, experienced component reviewers react differently than their experienced balanced and supporting counterparts by compensating more for the effect of documentation structure, while less experienced component reviewers are unable to make this adjustment. As with all research, our study's limitations should be considered when evaluating the findings. For example, due to logistical impracticalities we utilize a self-reported measure of reviewer task-specific experience rather than an observable measure, likely introducing noise into the measurement of task-specific experience. Further refinement of the reviewer task-specific experience measure might allow for more precise conclusions. Additionally, our study examines only three types of documentation structures and, thus, cannot speak to the effects of other documentation structures that may be found in practice. Investigating other structures may merit further research. We also investigate only a single task/context (i.e., we examine the effects of documentation structure and reviewer experience in a fraud risk/control weakness assessment setting). Future research could consider reviewer task-specific experience in other contexts and review tasks to determine under which tasks/contexts this experience is most critical. Further, experimental studies typically constrain the amount of information they incorporate in the experimental materials, and our study is no exception.


While the case we provide our participants is extensive, it does not represent all the evidence auditors may have at their disposal when making such decisions and, thus, cannot fully capture the complexity of judgments that occur in practice. The findings of this study have implications for practice and future research. From a practice standpoint, given the increased expectations with respect to assessing controls facing audit firms today, the results of this study suggest that firms should consider the effect of how their workpapers relating to the assessment of control weaknesses are structured. For example, SAS No. 99 (AICPA 2002) repeatedly cautions the auditor to maintain a healthy level of professional skepticism and explicitly discusses potential weaknesses and fraud risk factors relating to the control environment to be on the lookout for. The standard emphasizes the need to exercise professional skepticism in gathering and evaluating evidence throughout the audit and to continually be alert for information or conditions that indicate a material misstatement due to fraud may have occurred. This heavy focus on identifying weaknesses may not be well suited to a component documentation structure, as component documentation tends to result in more detailed documentation (i.e., more evidence, in general, including strengths). Thus, component documentation may actually result in both less effective and less efficient fraud risk workpapers than other, less detailed approaches. In addition, while few studies consider differences in reviewer reactions to experimental manipulations within experience levels (e.g., Messier and Tubbs 1994), we find that experienced reviewers react differently to their preparers' workpapers depending on the documentation structure employed. Future research could investigate whether factors other than documentation structure affect the judgments/reactions of reviewers within specific experience levels. Also, our results suggest that reviewer task-specific experience may help reduce the previously observed flow-through effect of preparer workpaper deficiencies on reviewer judgments


(e.g., Ricchiute 1999; Agoglia et al. 2003). Our findings indicate that reviewers with greater task-specific experience (relative to those with lesser experience) appear better suited to overcome their preparers' potential control weakness omissions in the workpapers. SAS No. 99 specifies that personnel should be assigned engagement responsibilities based on a match between the task-specific requirements and the knowledge, skill, and abilities of the auditor. Consistent with this guidance, our results suggest that firms should assign more experienced reviewers to fraud risk assessment reviews, particularly when the risk of material misstatement due to fraud is high. Thus, our results help reinforce that the emphasis on experience during the fraud risk and control assessment processes prescribed by recent pronouncements (e.g., AICPA 2002; PCAOB 2004a) is well placed. Further, given new requirements relating to internal controls, future research could investigate the effect of documentation structure and reviewer experience on the auditor's internal controls opinion decision. Such research will further our understanding of the effect of documentation quality and structure on auditor judgment.


REFERENCES

Abdolmohammadi, M.J., and J. Shanteau. 1992. Personal attributes of expert auditors. Organizational Behavior and Human Decision Processes 53 (November): 158-172.

Agoglia, C.P., T. Kida, and D.M. Hanno. 2003. The effects of justification memos on the judgments of audit reviewees and reviewers. Journal of Accounting Research 41 (March): 33-46.

Agoglia, C.P., R.C. Hatfield, and J.F. Brazel. 2007. The effects of audit review format on audit team judgments. Working paper, Drexel University.

American Institute of Certified Public Accountants (AICPA). 1995. Consideration of Internal Control in a Financial Statement Audit. Statement on Auditing Standards No. 78. New York, NY: AICPA.

American Institute of Certified Public Accountants (AICPA). 2002. Consideration of Fraud in a Financial Statement Audit. Statement on Auditing Standards No. 99. New York, NY: AICPA.

American Institute of Certified Public Accountants (AICPA). 2006a. Audit Evidence. Statement on Auditing Standards No. 106. New York, NY: AICPA.

American Institute of Certified Public Accountants (AICPA). 2006b. Planning and Supervision. Statement on Auditing Standards No. 108. New York, NY: AICPA.

Armstrong, J.S., W.B. Denniston, and M.M. Gordon. 1975. The use of the decomposition principle in making judgments. Organizational Behavior and Human Performance 14 (October): 257-263.

Arndt, M. 2004. Accounting's beautiful losers; Big firms and small battled Sarbanes-Oxley and failed. Now they're finding it's a source of unexpectedly lucrative rewards. Business Week Online (November 10).

Ballou, B. 2001. The relationship between auditor characteristics and the nature of review notes for analytical procedure working papers. Behavioral Research in Accounting 13: 25-48.

Biggs, S.F., W.F. Messier, Jr., and J.V. Hansen. 1987. A descriptive analysis of computer audit specialists' decision-making behavior in advanced computer environments. Auditing: A Journal of Practice & Theory 6 (Spring): 1-21.

Blaney, B. 2005. SEC sues Patterson-UTI's former CFO, judge freezes assets. The Associated Press (November 17).

Bonner, S. 1990. Experience effects in auditing: The role of task-specific knowledge. The Accounting Review 65 (January): 72-92.

26

Bonner, S.E. and B. Lewis. 1990. Determinants of auditor expertise. Journal of Accounting Research 28 (Supplement): 1-20.
Brazel, J.F., and C.P. Agoglia. 2007. An examination of auditor planning judgments in a complex accounting information system environment. Contemporary Accounting Research 24 (Winter): forthcoming.
Brazel, J.F., C.P. Agoglia, and R.C. Hatfield. 2004. Electronic versus face-to-face review: The effects of alternative forms of review on auditors' performance. The Accounting Review 79 (October): 949-967.
Choo, F. and K.T. Trotman. 1991. The relationship between knowledge structure and judgments for experienced and inexperienced auditors. The Accounting Review 66 (July): 464-485.
Chung, J. and G.S. Monroe. 2000. The effects of experience and task difficulty on accuracy and confidence assessments of auditors. Accounting and Finance 40 (July): 135-151.
Cornelius III, E.T. and K.S. Lyness. 1980. A comparison of holistic and decomposed judgment strategies in job analyses by job incumbents. Journal of Applied Psychology 65 (2): 155-163.
DeZoort, F.T. and S.E. Salterio. 2001. The effects of corporate governance experience and financial reporting and audit knowledge on audit committee members' judgments. Auditing: A Journal of Practice & Theory 20 (September): 31-47.
Einhorn, H.J. 1972. Expert measurement and mechanical combination. Organizational Behavior and Human Performance 7 (February): 86-106.
Emby, C. and M. Gibbins. 1988. Good judgment in public accounting: Quality and justification. Contemporary Accounting Research 4 (Spring): 287-313.
Gettys, S.T., C. Michel, J.H. Steiger, C.W. Kelly, and C.R. Peterson. 1973. Multiple-stage probabilistic information processing. Organizational Behavior and Human Performance 10 (December): 374-387.
Glover, S. 1997. The influence of time pressure and accountability on auditors' processing of non-diagnostic information. Journal of Accounting Research 35 (Autumn): 213-226.
Glovin, D. 2003. Tyco sues former CFO for US$400M: Mark Swartz: Improperly took funds and assets, lawsuit says. Bloomberg News (April 3rd).
Gomez, H. 2004. Accounting firms add to ranks; Some turn away business as Sarbanes-Oxley's impact strains staffs. Crain's Cleveland Business (September 13th).
Hackenbrack, K. 1992. Implications of seemingly irrelevant evidence in audit judgment. Journal of Accounting Research 30 (Spring): 126-136.
Harding, N. and K.T. Trotman. 1999. Hierarchical differences in audit workpaper review performance. Contemporary Accounting Research 16 (Winter): 671-684.
Hassebrock, F., P.E. Johnson, P. Bullemer, P.W. Fox, and J.H. Moller. 1993. When less is more: Representation and selective memory in expert problem solving. The American Journal of Psychology 106 (Summer): 155-189.
Hoffman, V.B. and J.M. Patton. 1997. Accountability, the dilution effect, and conservatism in auditors' fraud judgments. Journal of Accounting Research 35 (Autumn): 227-237.
Howe, P.J. 2002. Adelphia Communications executives indicted on fraud, conspiracy charges. Boston Globe (September 24th).
Jako, R.A. and K.R. Murphy. 1990. Distributional ratings, judgment decomposition, and their impact on interrater agreement and rating accuracy. Journal of Applied Psychology 75 (5): 500-505.
Jiambalvo, J. and W. Waller. 1984. Decomposition and assessments of audit risk. Auditing: A Journal of Practice & Theory 3 (2): 80-88.
Johnson, V.E. and S.E. Kaplan. 1991. Experimental evidence on the effect of accountability on auditor judgments. Auditing: A Journal of Practice & Theory 10 (Supplement): 96-107.
Knapp, C.A. and M.C. Knapp. 2001. The effects of experience and explicit fraud risk assessment in detecting fraud with analytical procedures. Accounting, Organizations and Society 26 (January): 25-37.
Koonce, L., U. Anderson, and G. Marchant. 1995. Justification of decisions in auditing. Journal of Accounting Research 33 (Autumn): 369-384.
Larkin, J., J. McDermott, D.P. Simon, and H.A. Simon. 1980. Expert and novice performance in solving physics problems. Science 208: 1335-1342.
Libby, R. and J. Luft. 1993. Determinants of judgment performance in accounting settings: Ability, knowledge, motivation, and environment. Accounting, Organizations and Society 18 (July): 425-451.
Libby, R. and K. Trotman. 1993. The review process as a control for differential recall of evidence in auditor judgments. Accounting, Organizations and Society 18 (August): 559-574.
Libby, R. 1995. The role of knowledge and memory in audit judgment. In Judgment and Decision Making Research in Accounting and Auditing, edited by R. Ashton and A. Ashton, p. 176.
Lyness, K.S. and E.T. Cornelius III. 1982. A comparison of holistic and decomposed judgment strategies in a performance rating simulation. Organizational Behavior and Human Performance 29 (February): 21-38.
Messier Jr., W.F. and R.M. Tubbs. 1994. Recency effects in belief revision: The impact of audit experience and the review process. Auditing: A Journal of Practice & Theory 13 (Spring): 57-72.
Monroe, G.S. and J. Ng. 2000. An examination of order effects in auditors' inherent risk assessments. Accounting and Finance 40 (July): 153-186.
O'Donnell, E. 2002. Evidence of an association between error-specific experience and auditor performance during analytical procedures. Behavioral Research in Accounting 14: 179-195.
Patel, V.L., G.J. Groen, and C.H. Frederiksen. 1986. Differences between medical students and doctors in memory for clinical cases. Medical Education 20: 3-9.
Peecher, M.E. 1996. The influence of auditors' justification processes on their decisions: A cognitive model and experimental evidence. Journal of Accounting Research 34 (Spring): 125-140.
Peecher, M.E. and I. Solomon. 2001. Theory and experimentation in studies of audit judgment and decisions: Avoiding common research traps. International Journal of Auditing (November): 193-203.
Pincus, K.V. 1989. The efficacy of a red flags questionnaire for assessing the possibility of fraud. Accounting, Organizations and Society 14: 153-163.
Public Company Accounting Oversight Board (PCAOB). 2004a. An Audit of Internal Control Over Financial Reporting Performed in Conjunction with An Audit of Financial Statements, Auditing Standard No. 2. Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2004b. Audit Documentation, Auditing Standard No. 3. Washington, DC: PCAOB.
Public Company Accounting Oversight Board (PCAOB). 2007. An Audit of Internal Control Over Financial Reporting That Is Integrated With An Audit of Financial Statements, Auditing Standard No. 5. Washington, DC: PCAOB.
Ramsay, R.J. 1994. Senior/manager differences in audit workpaper review performance. Journal of Accounting Research 32 (Spring): 127-135.
Ravinder, H.V., D.N. Kleinmuntz, and J.S. Dyer. 1988. The reliability of subjective probabilities obtained through decomposition. Management Science 34 (2): 186-199.
Ricchiute, D.N. 1999. The effect of audit seniors' decisions on working paper documentation and on partners' decisions. Accounting, Organizations and Society 24 (April): 155-171.
Shelton, S.W. 1999. The effect of experience on the use of irrelevant evidence in auditor judgment. The Accounting Review 74 (April): 217-224.
Slovic, P., B. Fischhoff, and S. Lichtenstein. 1977. Behavioral decision theory. Annual Review of Psychology 28: 1-39.
Sohn, Y.W., and S.M. Doane. 2004. Memory processes of flight situation awareness: Interactive roles of working memory capacity, long-term working memory, and expertise. Human Factors 46 (Fall): 461-476.
Solomon, I. 1987. Multi-auditor judgment/decision making research. Journal of Accounting Literature 6: 1-25.
Tan, H-T. 1995. Effects of expectation, prior involvement, and review awareness on memory for audit evidence and judgment. Journal of Accounting Research 33 (Spring): 113-135.
Tan, H-T. and K. Trotman. 2003. Reviewers' responses to anticipated stylization attempts by preparers of audit workpapers. The Accounting Review 78 (April): 581-605.
The Daily News of Los Angeles. 2004. Recruiters returning to college campuses. Wire Service (November 16th).
Trotman, K. 1985. The review process and the accuracy of auditor judgments. Journal of Accounting Research 23 (Autumn): 740-752.
Wiggins, M. and D. O'Hare. 1995. Expertise in aeronautical weather-related decision making: A cross-sectional analysis of general aviation pilots. Journal of Experimental Psychology: Applied 1 (4): 305-320.
TABLE 1
Preparer Control Weakness Assessments

Panel A: Descriptive Statistics and Analysis of Variance
Cell entries are means with standard deviations in parentheses; n = 18 per documentation-structure condition.

| Variable^a | Supporting | Balanced | Component | F Statistic | p-Value^b |
| Mean Assessment of Four Control Weaknesses | 3.89 (1.65) | 3.94 (1.50) | 5.61 (1.25) | 7.930 | .001 |
| Management's Attitude Toward Overriding Controls | 3.66 (1.78) | 3.72 (1.60) | 5.39 (1.85) | 5.640 | .006 |
| Degree of Management Oversight of Control Structure | 3.78 (1.66) | 4.17 (1.75) | 5.61 (1.75) | 5.640 | .006 |
| Controls to Safeguard Assets | 3.16 (1.58) | 3.44 (1.75) | 5.16 (1.75) | 7.309 | .002 |
| Segregation of Duties | 4.94 (2.67) | 4.44 (2.25) | 6.28 (1.67) | 3.233 | .048 |
| Quality of Preparer Assessments | 1.69 (1.19) | 1.64 (1.06) | 2.89 (1.24) | 6.602 | .003 |

Panel B: Contrast Tests Between Groups
Cell entries are t-statistics with p-values^b in parentheses.

| Variable^a | Supporting vs. Balanced | Supporting vs. Component | Balanced vs. Component | Supporting & Balanced vs. Component |
| Mean Assessment of Four Control Weaknesses | -0.113 (.910) | -3.504 (< .001) | -3.391 (< .001) | -3.981 (< .001) |
| Management's Attitude Toward Overriding Controls | -0.095 (.924) | -2.955 (.003) | -2.860 (.003) | -3.357 (< .001) |
| Degree of Management Oversight of Control Structure | -0.676 (.502) | -3.187 (.001) | -2.511 (.008) | -3.290 (.001) |
| Controls to Safeguard Assets | -0.490 (.626) | -3.529 (< .001) | -3.039 (.002) | -3.792 (< .001) |
| Segregation of Duties | 0.671 (.505) | -1.789 (.040) | -2.460 (.009) | -2.453 (.008) |
| Quality of Preparer Assessments | 0.143 (.887) | -3.073 (.003) | -3.216 (.002) | -3.631 (< .001) |

^a Auditors were asked to assess whether each fraud risk factor was likely or unlikely to be a problem area with regard to the client's control environment. Assessments were made on 10-point scales ranging from "highly unlikely to be a problem area" (coded as 10) to "highly likely to be a problem area" (coded as 1); thus, lower scores indicate control environment weaknesses. Mean Assessment of Four Control Weaknesses represents participants' mean responses for the four fraud risk factors (Overriding Controls, Oversight of Control Structure, Controls to Safeguard Assets, and Segregation of Duties). Quality of Preparer Assessments is computed as the absolute deviation of participants' assessments from expert assessments of the four control weaknesses.
^b All tests are two-tailed except the contrast tests for supporting vs. component and balanced vs. component, which are one-tailed due to the directional nature of expectations.
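For readers who want to see the mechanics behind statistics like those in Table 1, the sketch below shows a one-way ANOVA across the three documentation-structure conditions (as in Panel A) and pooled-error planned contrasts of the kind reported in Panel B. It is illustrative only and is not the authors' analysis code: the per-auditor responses are hypothetical draws that merely mimic the reported cell means and standard deviations for the mean-assessment row.

```python
# Illustrative sketch only -- not the authors' analysis code.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
supporting = rng.normal(3.89, 1.65, 18)   # hypothetical 10-point-scale responses
balanced = rng.normal(3.94, 1.50, 18)
component = rng.normal(5.61, 1.25, 18)
groups = [supporting, balanced, component]

# Panel A analogue: one-way ANOVA across the three documentation-structure conditions.
f_stat, p_value = stats.f_oneway(*groups)

# Panel B analogue: planned contrasts of condition means using the pooled error term.
n = np.array([len(g) for g in groups])
means = np.array([g.mean() for g in groups])
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
df_within = int(n.sum()) - len(groups)
mse = ss_within / df_within

def planned_contrast(weights):
    """Return the t-statistic and two-tailed p-value for a contrast of the means."""
    w = np.asarray(weights, dtype=float)
    estimate = (w * means).sum()
    se = np.sqrt(mse * (w ** 2 / n).sum())
    t = estimate / se
    return t, 2 * stats.t.sf(abs(t), df_within)

# Contrasts mirroring Panel B's columns (halve p for the one-tailed comparisons).
sup_vs_bal = planned_contrast([1, -1, 0])
sup_vs_comp = planned_contrast([1, 0, -1])
bal_vs_comp = planned_contrast([0, 1, -1])
pooled_vs_comp = planned_contrast([0.5, 0.5, -1])   # supporting & balanced vs. component
```

With equal cell sizes of 18, this approach yields t-statistics of roughly the magnitude shown in Panel B (e.g., around -3.5 for supporting vs. component on the mean-assessment row), although simulated draws will not reproduce the reported values exactly.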

TABLE 2
Regression Results

DIFF_i = a0 + a1(TSE_i) + a2(COM_i) + a3(TSE_i*COM_i) + e_i

|               | a0    | TSE  | COM   | TSE*COM (H2) |
| Expected Sign | N/A   | N/A  | N/A   | +            |
| b             | N/A   | .13  | -.29  | .88          |
| t-statistic   | -1.34 | 0.96 | -0.81 | 2.49         |
| p-value^a     | .187  | .344 | .424  | .008         |

^a One-tailed p-values are reported where the expected sign is unidirectional.

Variable definitions:
DIFF = the preparer's minus the reviewer's mean assessment of the four control environment weaknesses;
TSE = the reviewer's task-specific experience; reviewers indicated their experience reviewing evaluations of the effectiveness of a client's control environment on a 9-point scale, with endpoints labeled "very extensive experience" (coded as 9) and "very limited experience" (coded as 1);
COM = coded 1 for the component documentation structure, 0 otherwise;
TSE*COM = the interaction between reviewer task-specific experience and the component documentation structure.
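As a rough illustration of how the Table 2 moderation model might be estimated, the sketch below fits DIFF = a0 + a1(TSE) + a2(COM) + a3(TSE*COM) + e by OLS using the statsmodels formula interface. It is not the authors' analysis code: the DataFrame, the simulated preparer and reviewer assessments, and the effect sizes are hypothetical stand-ins for the variables defined above.

```python
# Illustrative sketch only -- not the authors' analysis code. The simulated data
# are hypothetical and merely run in the hypothesized direction (a3 > 0) so the
# example has something to find.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_teams = 36                              # hypothetical number of preparer/reviewer pairs
tse = rng.integers(1, 10, n_teams)        # 9-point task-specific experience scale
com = rng.integers(0, 2, n_teams)         # 1 = component documentation structure
preparer = rng.normal(4.5, 1.5, n_teams)  # hypothetical mean weakness assessments
reviewer = preparer - 0.1 * tse * com + rng.normal(0, 0.5, n_teams)

df = pd.DataFrame({"DIFF": preparer - reviewer, "TSE": tse, "COM": com})

# "TSE * COM" expands to TSE + COM + TSE:COM, so the TSE:COM term plays the role of a3 (the H2 test).
model = smf.ols("DIFF ~ TSE * COM", data=df).fit()
print(model.summary())

# One-tailed p-value for the directional H2 prediction (valid when the
# estimated interaction carries the expected positive sign).
print(model.pvalues["TSE:COM"] / 2)
```

In this setup, the coefficient on TSE:COM corresponds to a3, and halving its two-tailed p-value (when the estimate is positive) mirrors the one-tailed test of the directional H2 prediction reported in the table.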
