
Proceedings of the 29th Annual Hawaii International Conference on System Sciences - 1996

Factors Influencing Risks and Outcomes in End-User Development

Joline Morrison
Department of MIS
University of Wisconsin, Eau Claire
Eau Claire, WI 54702

Diane Janvrin
Department of Accounting
University of Iowa
Iowa City, IA 52242


Abstract

End-user developed applications introduce many control risks into organizations. Literature suggests that influencing factors include developer experience, design approach, application type, problem complexity, time pressure, and the presence or absence of review procedures. This research explores the impacts of different design approaches through two field experiments evaluating the use versus non-use of a structured design methodology when developing complex spreadsheets. Our results indicated that subjects using the methodology showed a significant reduction in the number of linking errors, i.e., mistakes in creating links between values that must flow from one area of the spreadsheet to another or from one worksheet to another in a common workbook. We also observed that factors such as gender, application expertise, and workgroup configuration influenced spreadsheet error rates as well.



1. Introduction

During the past decade, the use of end-user developed applications, especially microcomputer spreadsheets, has grown exponentially. However, when identifying and evaluating risks within organizations, end-user developed computer applications are often overlooked. Research has revealed that as many as 44% of all end-user spreadsheets contain at least one error [5]. Such errors can have catastrophic results. For example, a Florida-based construction company lost $254,000 by relying on an incorrect spreadsheet analysis for project costing [34].
Previous research suggests that errors in end-user developed applications may be influenced by several factors. The purpose of this research is to investigate the impacts of different design approaches on spreadsheet error rates and development times. The structure of the paper is as follows: Section 2 describes related work that forms the basis for our initial research model, Section 3 describes our structured spreadsheet design methodology, Section 4 describes our experimental studies, and Section 5 presents conclusions, limitations, and future research directions.

2. Background and Model Development

Cotterman & Kumar [9] classify end-user computing risks according to three major dimensions: end-user development, end-user operation, and end-user control of information systems. To our knowledge, no work has specifically identified and attempted to empirically validate factors impacting outcomes in end-user developed applications. Literature suggests that potential factors include developer attributes, development approach, developer configuration, problem/process characteristics, and application domain. The following sections explore these areas and identify measures for predicting outcomes.

Developer Attributes

Cotterman and Kumar [9] define an end-user as any organizational unit or person who has an interaction with the information system as a producer or consumer of information. Our work focuses on end-users who are both producers and consumers of information. Several studies have indicated that end-users are diverse, and that this diversity leads to a need for differentiated education, training, support, software tools, and risk assessment (e.g., [14, 19, 30]).
Harrison & Rainer [17] assert that specific attributes include gender, age, computer experience, computer confidence, math anxiety, and cognitive style. Curtis et al.'s field studies on


software development (e.g., [10]) repeatedly confirm the importance of understanding the application domain. Adelson & Soloway [1] emphasize the importance of expertise in software design. Frank, Shamir & Briggs [15] found that PC user knowledge, i.e., application expertise, positively impacted security-related behavior. Further, several researchers indicate that user training and relevant experience are important factors (e.g., [2, 33]).


Developer Configuration

Panko & Halverson [28] and Nardi & Miller [26] both note that the accuracy of end-user developed applications may be increased by having multiple developers collaborate. However, Panko & Halverson [28] note that the increased communication and coordination costs of working in groups may outweigh the benefits of reduced error rates.
Problem/Process Characteristics

Several studies identify the difficulties associated with complex versus non-complex programming problems (e.g., [13, 32]). Studies on the effect of time pressure on programming activities indicate that due to time pressure, end-user developers may not spend sufficient time on problem definition and diagnosis [2, 6]. These researchers also noted an absence of formal review and validation procedures within end-user developed applications.

Development Approach

Design activities for end-user developed applications tend to be unstructured and often non-existent. Salchenberger [33] notes that end-users tend to develop and implement models without first performing requirements analysis or developing a system design. Brown & Gould [5] observed that their participants spent little time planning before launching into spreadsheet coding. Ronen, Palley and Lucas [31] suggest that several attributes of spreadsheets (i.e., non-professional developers, shorter development life cycles, ease of modification) tend to preclude a formal spreadsheet analysis and design process.
Almost all published spreadsheet design approaches agree on one principle: the importance of keeping spreadsheet work areas small in order to make errors easier to find, limit the damaging effects of errors, and make spreadsheets easier to understand and maintain. Researchers have noted that structured design methodologies increase the accuracy and reliability of information systems (e.g., [24]). Salchenberger [33] proposes the use of structured design methodologies in end-user development and discusses their effectiveness in two situations. Cheney, Mann & Amoroso [7] also support the use of a structured design methodology; they found that the more structured the tasks being performed by the end-user, the more likely the success of the application. Ronen, Palley and Lucas [31] describe a structured spreadsheet design approach based on modifying conventional data flow diagram symbols to create spreadsheet flow diagrams. Several practitioner-directed articles encourage the use of structured design approaches for spreadsheet planning and development.

Application Domain

Research on end-user development generally falls into one of two streams: general application development and spreadsheet development. (Notable exceptions include Frank, Shamir & Briggs [15] and Batra [3].) Panko and Halverson [28] appropriately summarize the state of spreadsheet research as follows:
In the past, spreadsheeting has been the Rodney Dangerfield of computer applications. Although everyone agrees that it is extremely important, few researchers have studied it in any depth. Given its potential for harm as well as for good, this situation needs to change.
Psychologists Hendry and Green [20] assert that spreadsheets have the ability to simplify programming by suppressing the complexities associated with conceptually simple operations like adding a list of numbers or handling input and output. As a result, users tend to ignore spreadsheet weaknesses such as the potential for errors or the difficulties experienced when debugging or reusing spreadsheets.
Outcomes

Salchenberger [33] suggests that factors to use in judging the quality of end-user developed applications include reliability, ease of use, maintainability, and auditability. Reliability can be defined as the absence of logical and mathematical errors in a model. One of the earliest studies on spreadsheet outcomes (conducted by Brown and Gould [5] in 1987) found that of 27 spreadsheets created by experienced spreadsheet users, 41% contained user-generated errors. In another study, Davies & Ikin [11] audited a sample of spreadsheets within an organization and discovered that 4 of 19 worksheets reviewed contained major errors and 13 of 19 had inadequate documentation. Panko & Halverson [29] asked students to develop a spreadsheet model working individually, in groups of two, and in groups of four, and observed total error rates ranging from 53% to 80%. They also noted that very little time (only a few minutes) was spent in design prior to implementation.
Panko and Halverson [29] note that their participants made several different types of errors. At the highest level, errors could be divided into two types: oversight errors and conceptual errors. They define oversight errors as dumb errors such as typographical errors and misreading numbers on the problem statement. In contrast, conceptual errors involve errors in domain knowledge or general modeling logic. Panko and Halverson [28] use Lorge and Solomon's [23] terminology to further divide conceptual errors into Eureka and non-Eureka errors. Once a Eureka error is identified, everyone can quickly agree that the error exists. However, non-Eureka errors are errors in which the incorrectness of the action cannot be easily demonstrated.
Generally, researchers evaluate the usability of end-user developed applications with user satisfaction survey tools [14, 22]. Salchenberger [33] defines maintainability as the ability to change or update the system as needs change, and auditability as the ability to determine the source of data values in the model. Few researchers have evaluated end-user developed applications for either maintainability or auditability. We propose the inclusion of cost (usually measured in terms of development time) as another important end-user development outcome.
Based on prior theoretical and empirical work in this area, we have developed an initial research framework (Figure 1) that summarizes end-user development risk factors and their relationships to potential outcomes. We now focus on determining the impacts of development approach on spreadsheet reliability and cost. The particular structured design methodology used in our studies is now described.



Figure 1. Initial Research Framework. Developer attributes (age, gender, computer confidence, domain expertise, application expertise, development expertise, math anxiety, cognitive style, training, experience), application type, development approach, and problem/process characteristics (problem complexity, existence of review procedures) are modeled as influences on end user-developed application outcomes (reliability, ease of use, maintainability, auditability, and cost).

3. Structured Spreadsheet Design


Modern spreadsheet applications enable users to
easily modularize
large spreadsheets by creating
worksheets
(individual
spreadsheet work areas)
within
a larger
workbook
(set of related
worksheets).
Users can link values from one
worksheet to another within the same workbook, or
link sheets among
multiple
workbooks.
This technique readily supports scenario analysis. For example, all of the schedules leading to a master budget can be prepared on separate worksheets within a workbook, and then linked together so that the impact of a change in one parameter can be evaluated quickly with a single keystroke.
However, this feature also leads to the likelihood of
linking errors. Data on a source worksheet may
be linked to the wrong cell on the target worksheet,
or a value on a target worksheet that should have
been linked from a previous worksheet is hardcoded: an actual value from a previous scenario is
directly typed in. When the value should change
for subsequent scenarios, the values for omitted or
incorrect links do not change as they should, and
errors result.
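To make the notion of a linking error concrete, the following minimal sketch (not part of the original study) shows how omitted links could be flagged programmatically. It assumes a modern .xlsx workbook, the Python openpyxl library, and hypothetical worksheet, file, and cell names.

# Minimal sketch (illustrative only): flag cells on a target worksheet that
# hold hard-coded constants where a formula link was expected.
# Assumes a modern .xlsx file and openpyxl; all names are hypothetical.
from openpyxl import load_workbook

# Cells on the Sales Budget that should be links to the Input Page.
EXPECTED_LINKS = {"B2", "B3", "B4"}

wb = load_workbook("master_budget.xlsx")   # formulas kept as strings, not cached values
sales = wb["Sales Budget"]

for ref in sorted(EXPECTED_LINKS):
    value = sales[ref].value
    is_formula = isinstance(value, str) and value.startswith("=")
    if not is_formula:
        # A typed-in constant here is an omitted link: it will not update
        # when the Input Page changes for a new scenario.
        print(f"{ref}: hard-coded value {value!r} where a link was expected")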
To support the task of designing
linked worksheets,
we build upon the work of
Ronen, Palley and Lucas [31] to develop a design
methodology based on data flow diagram symbols.




Figure 2. Basic Modeling Symbols

Figure 3. Example Design Model: Master Budget for Small Home Fixture Manufacturer

Data flow diagrams (DFDs) are documentation tools used by systems analysts to show organizational data sources, destinations, processes, and data inputs and outputs. Our basic modeling symbols (Figure 2) are similar to those used in standard DFD methodologies (e.g., Gane & Sarson [16]). These include an external data source (depicted by a rectangle) and data flows (shown by labeled directed arrows). Worksheets are analogous to processes, and are shown as labeled circles. Data flows depict data inputs from external sources to worksheets, and data that is linked among related worksheets.
Figure 3 shows an example design model. Worksheet pages are created to represent each of the modules shown on the design model (e.g., Input Page, Sales Budget, Production Budget, Pro Forma Income Statement). Inputs from external sources are entered on a centralized Input Page, and then flow into the Sales Budget, where they are transformed into data items needed for calculations on the other schedules. Using named variables, the specified links are created. Inputs are easily changed on the Input Page for scenario analysis.
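To illustrate how such a design can be captured before any spreadsheet coding begins, the following sketch (not part of the original paper) represents worksheets, external sources, and data flows as plain Python objects and lists the links each worksheet must implement; the worksheet and variable names are hypothetical stand-ins for the master-budget example.

# Illustrative sketch: a structured spreadsheet design expressed as external
# sources, worksheets (processes), and data flows, from which the explicit
# worksheet-to-worksheet links follow directly. Names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    source: str   # external source or worksheet the value comes from
    target: str   # worksheet that consumes the value
    item: str     # named variable carried by the flow

EXTERNAL_SOURCES = {"Market Forecast"}
WORKSHEETS = {"Input Page", "Sales Budget", "Production Budget",
              "Pro Forma Income Statement"}

FLOWS = [
    Flow("Market Forecast", "Input Page", "forecast_units"),
    Flow("Input Page", "Sales Budget", "forecast_units"),
    Flow("Input Page", "Sales Budget", "unit_price"),
    Flow("Sales Budget", "Production Budget", "units_to_produce"),
    Flow("Sales Budget", "Pro Forma Income Statement", "sales_revenue"),
]

# Every flow between two worksheets is a link that must be implemented
# explicitly; a value typed in by hand instead is a potential linking error.
for ws in sorted(WORKSHEETS):
    links = [f for f in FLOWS if f.target == ws and f.source in WORKSHEETS]
    print(ws, "->", [f"{f.item} from {f.source}" for f in links])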

4. Evaluation Studies

Figure 4. Model for Experimental Studies. Predictors: domain expertise (2) and development approach, ad hoc design (1, 2) or structured design (1, 2). Outcomes: reliability, measured as link errors (1, 2), conceptual errors (2), and oversight errors (2); and cost, measured as development time (1, 2) and development approach usability (2).

The basic research model guiding both studies is shown in Figure 4. (The numbers following the variable names indicate whether the variable was considered in Study #1, Study #2, or both.) We hypothesized that both developer attributes and development approach would influence error rate and development time. A positive relationship between developer confidence and improved outcomes was noted by Harrison & Rainer [17].
Prior research has also noted a positive relationship
between domain and application
expertise and
improved outcomes (e.g., [2, 6]). Structured systems design approaches (e.g., Constantine and Yourdon [8]) have successfully been used in systems development for over twenty-five years.
This approach can be applied to
spreadsheet
design
by
identifying
different
worksheets and their corresponding
data inflows
and outflows.
Thus, each individual
worksheet
effectively
becomes a black box that takes a
defined set of inputs and translates it into a
prescribed set of outputs. Since the worksheet links
are explicitly
defined in the design, the result
should be a decreased error rate for linking errors.
Study #1
Subjects, Treatment, and Measures.
Sixty-one
upper- and masters-level accounting and business
administration
majors in an accounting information
systems course at a large public university were
assigned to the treatment (structured design) and
control (ad hoc design) groups based upon course
sections. Assignment
of treatments
to specific
sections was done randomly.
All subjects were
given one hour of formal training in spreadsheet
development,
and then instructed
to develop a
single spreadsheet workbook containing a series of
linked worksheets using Microsoft
Excel.
The
assigned
problem
contained
ten
separate
worksheets, with a total of fifty-one linked data
values among the worksheets. A paper template of the worksheets with one complete data set (from [25]) was given to all subjects to provide check figures and mitigate non-linking errors (only numerical values were specified; the subjects still had to develop and correctly implement the underlying design in order to successfully perform scenario analysis). All subjects were required to maintain logs recording time spent on design and development activities. The treatment group was required to submit their spreadsheet design prior to being assigned a password to enable computer data entry.
A pre-test questionnaire was administered to identify gender, age, and class level, as well as self-reported pre-test application expertise (in terms of both spreadsheets in general and the specific spreadsheet application, Microsoft Excel), DFD development expertise, and design confidence. Development time was determined by combining two measures: (1) self-reported design logs covering two- or three-day periods during the experiment; and (2) system-recorded measures of actual implementation times. The subjects were given sixteen days to complete the assignment. At the end of that period, questionnaires were administered to ascertain post-test spreadsheet and Excel expertise and design and coding confidence. Error rates were determined by examining the spreadsheet data files.



Table 1. Means and Standard Deviations (Study #1). Measures include Gender (1=Male, 2=Female), Age Group (1=20-25, 2=26-30, 3=31-40, 4=40-50), Class Level (1=Jr., 2=Sr., 3=Grad.), pretest spreadsheet, Excel, and DFD expertise (1=Novice, 5=Expert), pretest design confidence (1=Very Unconfident, 5=Very Confident), and mean number of linking errors per workbook (total possible links = 51). N = 66. *p <= .10; **p <= .05.




Analysis and Results. Due to an unexpected software constraint, we were forced to allow subjects in Study #1 to work either individually or in self-selected pairs within treatment groups. (In Study #2 all subjects worked alone.) Means and standard deviations for the independent and dependent variables for all subjects, as well as results sorted by configuration and treatment group, are shown in Table 1. Analysis was done on a per-subject basis, with scores for time and error rate replicated for each subject in a given dyad.
We used a least squares multiple regression to
test the research model components and proposed
linkages.
Examination
of the correlations among
the independent variables revealed no evidence of
multicollinearity:
that is, rs <= .8 [4]. Table 2
shows that gender, age group, class level, and pretest Excel expertise had no impact on any of the outcomes, while configuration (singles versus pairs), treatment group (structured versus ad hoc design), and pre-test spreadsheet expertise all displayed significant correlations with one or more
of the dependent variables.
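The analysis can be reproduced in outline as follows. The sketch below is illustrative only: it uses hypothetical data rather than the study's, the pandas and statsmodels libraries, and the same pairwise correlation screen (|r| <= .8) described above.

# Illustrative sketch (hypothetical data, not the study's): screen the
# independent variables for multicollinearity and fit a least squares
# regression for one dependent measure.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 61
df = pd.DataFrame({
    "treatment": rng.integers(1, 3, n),         # 1 = ad hoc, 2 = structured design
    "configuration": rng.integers(1, 3, n),     # 1 = working alone, 2 = pair
    "pre_ss_expertise": rng.integers(1, 6, n),  # 1 = novice, 5 = expert
    "linking_errors": rng.integers(0, 10, n),
})

predictors = df[["treatment", "configuration", "pre_ss_expertise"]]
corr = predictors.corr().abs().values
# Flag any off-diagonal pairwise correlation above the .8 screening threshold.
if (corr[np.triu_indices_from(corr, k=1)] > 0.8).any():
    print("possible multicollinearity among predictors")

model = sm.OLS(df["linking_errors"].astype(float),
               sm.add_constant(predictors.astype(float))).fit()
print(model.summary())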
Following Heise [18], the independent variables
that did not explain a significant variation in the
outcomes were excluded from further analysis.
Table 3 shows that the variations in the dependent
variables are explained to some degree by the
independent
variables
in the reduced model.
MANOVA
analysis was used to detect potential
interactions
between
the
configuration
and
treatment variables on error rate. This revealed a weakly significant difference (p = .10) between the control and the treatment groups, as well as a weakly significant interaction (p = .10) between the
configurations and the treatment.
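A minimal sketch of the interaction test is shown below, using the statsmodels MANOVA interface on hypothetical data; treating linking errors and development time together as the dependent measures is an assumption made only for illustration.

# Illustrative sketch (hypothetical data): test for a configuration x
# treatment interaction with MANOVA. The choice of dependent measures here
# is an assumption, not taken from the original study.
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(1)
n = 61
df = pd.DataFrame({
    "configuration": rng.choice(["single", "pair"], n),
    "treatment": rng.choice(["ad_hoc", "structured"], n),
    "link_errors": rng.poisson(4, n).astype(float),
    "dev_time": rng.normal(10.0, 2.0, n),
})

# mv_test() reports Wilks' lambda and F statistics for each model term,
# including the configuration:treatment interaction.
manova = MANOVA.from_formula(
    "link_errors + dev_time ~ C(configuration) * C(treatment)", data=df)
print(manova.mv_test())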
Summary and Discussion. Overall, subjects using the structured design technique exhibited significantly lower error rates: of the fifty-one possible links in the spreadsheet, the structured design subjects had an average of 3.57 (7%) linking errors, while the ad hoc design subjects averaged 4.1 (10%) linking errors. We believe this was because the design methodology forced the developers to explicitly identify the links between worksheets. This explicit identification was then carried over into the implementation. Virtually all of the linking errors were due to omitted links (where a value had been hard coded instead of linked) rather than incorrect links. This may have been because subjects were relying heavily on the check figures provided for the first scenario instead of examining the reasonableness of their answers for the subsequent cases.
This improvement in linking errors was mitigated by subject configuration: subjects working alone within the ad hoc design group averaged 7.44 linking errors per spreadsheet (14%), while subjects working alone in the design group and both sets of paired subjects all averaged approximately 3.8 linking errors per spreadsheet (7%).





The MANOVA analysis results imply that for subjects working alone, the use of the design methodology had a significant impact. Unfortunately, the advantage of using the structured design methodology was weakened when the subjects were working in pairs. Two potential explanations emerge: (1) subjects working in pairs gained other advantages that offset the impact of the design approach; or (2) another external factor (or factors) induced some subjects to elect to work in pairs, and this factor also enabled them to create spreadsheets with fewer linking errors.
Our results also indicated that subjects working in pairs reported higher post-test spreadsheet expertise but lower design confidence. The latter result seemed inconsistent with literature suggesting that collaboration usually leads to greater confidence. The significantly lower overall time investment for the structured design subjects was unexpected but encouraging. (This result does not consider the time spent learning the design methodology initially, however.) Finally, as expected, pre-test spreadsheet expertise was a significant predictor of post-test spreadsheet expertise, reduced error rates, and reduced time investments. The results of this study suggested that developer attributes, configuration, and the development approach are all potential factors impacting end-user development outcomes. However, the unexpected results for time and post-test confidence, as well as the unexpected configuration variable, induced us to perform a second study with revised procedures and measures.

Study #2

The problem used in Study #2 was slightly more complex than the one used in Study #1, with ten different worksheets and a total of sixty-six linked cells. All subjects were required to work individually so as to increase sample size and allow a more isolated investigation of the impacts of the design methodology and developer attributes. Error types observed were expanded to include conceptual and oversight errors as well as linking errors; a blank template (from Hilton [21], pp. 439-443) with only a single check figure was given. Explicit post-test measures were introduced to assess design confidence, coding confidence, and confidence that the spreadsheet was error-free. Additionally, a measure was included to explore the impact of potential domain expertise: subjects who were accounting majors had more experience and coursework in preparing financial reports and other master budgeting topics than the non-accounting majors. Subjects in the treatment group were also asked to rate their post-treatment DFD development confidence, and to rate their satisfaction with the development approach. Specifically, they were asked the degree to which they agreed or disagreed with statements regarding: (1) whether the DFD design methodology helped them develop a better spreadsheet; (2) if using the methodology made the development effort take longer; and (3) whether they would use the methodology the next time they developed a complex spreadsheet.

Analysis and Results. Table 4 reports overall
means and standard deviations for all measures.
Correlation
analysis indicated
a high degree of
correlation
between pretest Excel expertise and
overall
spreadsheet
expertise;
since the Excel
expertise measure had the highest correlation with
the dependent variables, it was included in the
regression analysis, and the pretest spreadsheet
expertise measure was omitted. Table 5 shows the initial regression analysis for all variables. Significant results were attained for gender, pretest Excel expertise, DFD expertise, and design confidence. A reduced model analysis performed by omitting the non-significant independent variables yielded essentially the same results. Table
6 shows the regression analysis results for the
design methodology
attitudinal measures obtained
from the treatment group subjects.
Summary and Discussion. Table 5 suggests that our independent variables accounted for significant differences in all of the dependent measures except number of conceptual errors and overall time spent on design and coding. An interesting gender difference surfaced: females reported significantly (p <= .05) less perceived post-test spreadsheet expertise and confidence that their spreadsheets were error-free. This concurs with Harrison & Rainer's [17] finding that males tend to have more experience with computers and higher confidence levels. Interestingly, the females also had significantly (p <= .05) fewer oversight errors. We postulated that this may have been a result of their lower confidence levels: perhaps the females reviewed their spreadsheets more and caught their oversight errors.
Subjects in higher class levels (i.e., seniors and masters students) also displayed less confidence that their spreadsheets were error-free. There were no notable correlations between class level and any of the other independent variables; in fact, subjects in the higher levels seemed in general to have less pre-test application expertise. We attributed this result to the inexperience and (often misplaced?) overconfidence of youth.


Table 5. Regression Analysis Results (Study #2). Predictors include domain experience (1=Other, 2=Accounting), treatment group (1=Control, 2=Treatment), pretest Excel expertise, and pretest DFD expertise.

Using the structured design methodology significantly (p <= .01) reduced the number of link errors, but had no impact on the number of oversight or conceptual errors. Interestingly, link errors were by far the most common error made. Each spreadsheet (for the combined control and treatment groups) had an average of 9.07 total errors; of these, 84% were linking errors. It is important to note that using the design methodology had no significant impact on total time spent on the project (aside from the time taken to learn the methodology initially).

Domain experience was a positive predictor of post-test spreadsheet expertise (p <= .05) and post-test design confidence (p <= .01). Neither of these findings was particularly surprising. Domain experience was somewhat correlated to pre-test spreadsheet expertise (r = .19) as well, so apparently the accounting majors had more prior expertise with spreadsheets than the non-accounting majors. The problem was fairly complex, with ten different worksheets and a total of sixty-six links, so it is reasonable to expect that domain experience would contribute to design confidence.



Table 6. Regression Analysis Results for Treatment Attitudinal Measures (Study #2). Attitudinal scale: 1=Strongly Disagree, 5=Strongly Agree; class level: 1=Jr., 2=Sr., 3=Grad.; major: 1=Other, 2=Acctg.; p <= .05; p <= .01; p <= .001.

Pretest Excel expertise was the most significant predictor, positively impacting post-test spreadsheet expertise (p <= .01), post-test Excel expertise (p <= .01), post-test design confidence (p <= .05), and post-test coding confidence (p <= .01). Subjects with higher pre-test Excel expertise also made fewer conceptual errors (p <= .01) and fewer overall errors (p < .01). These findings concur with the results of many studies emphasizing the importance of application experience and resulting expertise (e.g., [1, 27]).
The attitudinal responses summarized in Table 6 indicated that females were significantly less confident of their post-test DFD development ability. Pre-test design confidence was positively correlated to the belief that creating the DFD was helpful and would be used in the future. Subjects with a higher degree of pretest spreadsheet expertise did not think that developing the DFD caused the development effort to take longer. Overall, from an attitudinal standpoint, the subjects were ambivalent to the costs and benefits of the structured design approach.

5. Conclusions, Limitations, and Future Research

Our experimental studies confirmed that developer attributes, specifically gender and domain and application expertise, significantly impacted error rates. Additionally, we have shown that using the structured design methodology significantly improved reliability for two different linked spreadsheet applications. Study #1 also inadvertently suggested that developer configuration is an important factor and has potential interactions with the development approach.
As mentioned previously, our initial study was somewhat confounded because the pairs were self-selected. Our future studies will address the impacts of collaborating in non-self-selected groups. Additionally, our research has focused on absolute error rates. A limitation of this approach is the question of how error rates are correlated to ultimate outcomes, i.e., the bottom line or poor decisions resulting from spreadsheet models. Also, the impact of the structured development approach on problems of differing complexity needs to be explored. Another area that needs to be addressed in the future involves the outcomes associated with the use of end-user developed spreadsheets by other end users, in terms of usability, maintainability, and auditability. Additionally, it will be interesting to explore the results when end-user developers know that their spreadsheets will be reviewed or audited. Finally, we have only considered spreadsheets; we need to determine if our overall causal model is applicable to other end-user development domains as well.
As end-user development increases in organizations, the identification of strategies to minimize risks and improve outcomes is becoming increasingly important. This research has made an important contribution both by identifying end-user development risk factors and outcomes, and by developing and empirically evaluating the impacts of a structured design approach for spreadsheet development.


References
1. Adelson, B. and Soloway, E., The role of domain experience in software design, IEEE Trans. on Software Eng. SE-11 (Nov. 1985), 1357-1360.
2. Alavi, M. & Weiss, I., Managing the risks associated with end-user computing, J. MIS 2, 3 (Winter 1986), 413-426.
3. Batra, D., A framework for studying human error behavior in conceptual database modeling, Info. & Management 25 (Sept. 1993), 121-131.
4. Billings, R.S. and Wroten, S.P., Use of path analysis in industrial/organizational psychology: Criticisms and suggestions, J. Applied Psych. 63 (June 1978), 677-688.




5. Brown, P.S. & Gould, J.D., An experimental study of people creating spreadsheets, ACM Trans. on Office Info. Systems 5 (July 1987), 258-272.
6. Cale, E., Quality issues for end-user developed software, J. Systems Management (Jan. 1994), 36-39.
7. Cheney, P., Mann, R. and Amoroso, D., Organizational factors affecting the success of end-user computing, J. MIS 3, 2 (Summer 1986), 65-80.
8. Constantine, L.L. and Yourdon, E., Structured Design, Prentice-Hall, Englewood Cliffs, New Jersey, 1979.
9. Cotterman, W. & Kumar, K., User cube: a taxonomy of end users, C. ACM 32, 11 (Nov. 1989), 1313-1320.
10. Curtis, B., Krasner, H. and Iscoe, N., A field study of the software design process for large systems, C. ACM 31, 11 (Nov. 1988), 1268-1287.
11. Davies, N. & Ikin, C., Auditing spreadsheets, Australian Accountant (Dec. 1987), 54-56.
12. Davis, F., Perceived usefulness, perceived ease of use, and user acceptance of information technology, MIS Quarterly 13, 3 (Sept. 1989), 318-339.
13. DeRemer, F. & Kron, H., Programming-in-the-large versus programming-in-the-small, IEEE Transactions on Software Engineering SE-2 (June 1976), 80-86.
14. Doll, W. & Torkzadeh, G., A discrepancy model of end-user computing involvement, Management Sci. 35, 10 (October 1989), 1151-1171.
15. Frank, J., Shamir, B. and Briggs, W., Security-related behavior of PC users in organizations, Info. & Mgmt. 21 (1991), 127-135.
16. Gane, C. and Sarson, T., Structured Systems Analysis, Prentice-Hall, Englewood Cliffs, N.J., 1979.
17. Harrison, A. & Rainer, R.K. Jr., The influence of individual differences on skill in end-user computing, J. MIS 9, 1 (Summer 1992), 93-111.
18. Heise, D.R., Problems in path analysis and causal inference, in E.F. Borgatta (Ed.), Sociological Methodology, 38-73, Jossey-Bass, San Francisco, 1969.
19. Henderson, J. & Treacy, M., Managing end-user computing for competitive advantage, Sloan Mgmt. Review (Winter 1986), 3-14.
20. Hendry, D.G. & Green, T.R.G., Creating, comprehending and explaining spreadsheets: a cognitive interpretation of what discretionary users think of the spreadsheet model, Int. J. of Human-Computer Studies 40 (1994), 1033-1065.
21. Hilton, R., Managerial Accounting (2nd Ed.), McGraw Hill, New York, 1994, 439-440.
22. Igbaria, M. and Parasuraman, S., A path analytic study of individual characteristics, computer anxiety and attitudes toward microcomputers, J. of Mgmt. 13, 3 (1989), 373-387.
23. Lorge, I. & Solomon, H., Two models of group behavior in the solution of Eureka-type problems, Psychometrika 20, 2 (1955) (cited in [28]).
24. Markus, L. & Keil, M., If we build it, they will come: designing information systems that people want to use, Sloan Mgmt. Review (Summer 1994), 11-25.
25. Moriarity, S. & Allen, C.P., Cost Accounting (3rd Edition), John Wiley & Sons, New York, 1991, 285-291.
26. Nardi, B. & Miller, J., Twinkling lights and nested loops: distributed problem solving and spreadsheet development, Int. J. Man-Machine Studies 24 (1991), 161-184.
27. Ngwenyama, O., Developing end-users' systems development competence, Info. & Management 25 (Dec. 1993), 291-302.
28. Panko, R.R. & Halverson, R.P., Individual and group spreadsheet design: patterns of errors, in Proc. of the 27th HICSS, Vol. IV, 4-10, IEEE Computer Society Press, 1994.
29. Panko, R.R. & Halverson, R.P., Patterns of errors in spreadsheet development, unpublished working paper, University of Hawaii, 1995.
30. Rockart, J. & Flannery, L., The management of end user computing, C. ACM 26, 10 (Oct. 1983), 776-784.
31. Ronen, B., Palley, M.A. & Lucas, H.C., Spreadsheet analysis and design, C. ACM 32, 1 (Jan. 1989), 84-93.
32. Ross, D. & Schoman, K., Structured analysis for requirements definition, IEEE Trans. Software Eng. SE-3, 1 (Jan. 1977), 6-15.
33. Salchenberger, L., Structured development techniques for user-developed systems, Info. & Mgmt. 24 (Jan. 1993), 41-50.
34. Simkin, M.G., How to validate spreadsheets, J. of Accountancy 164, 5 (Nov. 1987), 130-138.
