
Journal of Occupational Health Psychology

1998, Vol. 3, No. 4, 390-401

Copyright 1998 by the Educational Publishing Foundation


1076-8998/98/$3.00

Measuring Job Stressors and Studying the Health Impact of the Work
Environment: An Epidemiologic Commentary


Stanislav V. Kasl
Yale University School of Medicine
This article provides a commentary on 5 articles in the special section that marshal a substantial
amount of information about 4 instruments for measuring work stress. The perspective is that of
psychosocial epidemiology and highlights the differences between the environmental and the
psychological traditions of studying stress and health. Several issues are addressed: (a) placing the
4 measures in a broader taxonomy of dimensions of work environment and evaluating the measures in that context, (b) discussing alternative strategies for measuring job strains, (c) analyzing
some of the issues in the triviality debate, and (d) reconsidering a number of issues in the ongoing
debate about "subjective" versus "objective" measurement approaches to work dimensions.

I have been asked to provide a commentary, from


the perspective of an epidemiologist, on the five
invited articles in this issue that deal with the
measurement of work stress. This is a relatively
open-ended mandate, including the perspective itself:
It allows me to borrow the best from epidemiology
but also to abandon epidemiologic wisdom and
tradition and become a social scientist when the
epidemiologic perspective is insufficient or misleading. Four of the articles are accounts of instruments
that the authors have developed and used: The Job
Stress Survey (JSS; Vagg & Spielberger, 1998) and
the Pressure Management Indicator (PMI; Williams
& Cooper, 1998) are relatively new instruments,
whereas the Job Content Questionnaire (JCQ; Karasek
et al., 1998) and the three stress scales of Spector and
Jex (1998) are more established instruments and the
authors review some of the accumulated evidence.
The fifth article (Hurrell, Nelson, & Simmons, 1998)
is an excellent overview of work stress measures
available and an incisive discussion of important
issues that look both at the past and at the future.
In some ways, the five articles need no commentary. The four instrument articles deliver admirably on
the primary objective of letting the reader-researcher
become much more familiar with these instruments;
they compile highly useful information in one place
and thus allow the researcher a much more informed
choice for his or her own research planning than is
normally the case. And the fifth article, being a
high-quality commentary on the field, hardly needs
another layer of comments from me.

Correspondence concerning this article should be addressed to
Stanislav V. Kasl, Department of Epidemiology and Public Health,
Yale University School of Medicine, New Haven, Connecticut
06520-8034. Electronic mail may be sent to stanislav.kasl@yale.edu.
Even though the emphasis in the four articles is on
measurement, one needs to be reminded that there are
four components of adequate methodology: (a) the
adequacy of the design; (b) the adequacy of the
conceptualization and operationalization of the study
variables; (c) the adequacy of data analysis, both from
the perspective of ruling out alternative explanations
and from the perspective of testing a particular
etiological or conceptual model; and (d) the adequacy
of the theoretical formulation regarding the etiological dynamics linking exposure to health outcomes.
The four articles obviously focus on measurement,
whereas the Hurrell et al. article includes measurement and data analysis. The point is that a particular
operationalization of work stress does not produce a
set of invariant biases and limitations that can be
adequately discussed in isolation from other methodological issues. Rather, a variable set of biases and
limitations may be applicable, depending on the study
design used, the procedures for measuring the
outcome variables, and the availability of additional
variables for statistical control of rival hypotheses.
Therefore, some of my comments will go beyond
purely measurement issues.

The Perspective of Psychosocial Epidemiology


In their introductory chapter to their edited book
Measuring Stress, Cohen, Kessler, and Gordon
(1995) discussed three broad traditions or perspectives for studying stress as a risk factor for disease:
environmental, psychological, and biological. The
environmental tradition is described as one that
"focuses on assessment of environmental events or


experiences that are normatively (objectively) associated with
substantial adaptive demands" (p. 3). Although they did not use the
term epidemiologic tradition, their definition of this environmental
tradition is substantially the same as the epidemiologic perspective
to which I subscribe.

The broad aims in the epidemiologic tradition are twofold: (a) to
demonstrate, if possible, the independent contribution of the
exposure variable, beyond the contribution of the established
biomedical risk factors, to disease etiology; and (b) to explore, if
possible, the interplay of the exposure variable and the biomedical
variables in contributing to disease etiology. Stated another way,
the objectives of epidemiologists are (a) to demonstrate the
etiological role of the exposure variable; (b) to show that this role
remains after adjustments for necessary confounders and control
variables; and (c) to learn as much as possible about the underlying
mechanisms and the moderating influences involved in the etiological
relationship.

Epidemiologic methods have evolved as a strategy for the study of
risk factors for various diseases at the population level. They are
generally not suitable for the additional study of the various
mechanisms that could be involved as intermediate links between the
exposure and the disease outcome. This leaves a serious problem of
the unopened "black box." In the study of work stress and disease,
the need to peer inside the black box is particularly important
because the exposure variable may be an exceedingly complex one, and
learning about the intermediate steps may be crucial to a full
understanding of the etiologic process.

Within classical occupational epidemiology, such as that dealing
with carcinogens in the work environment, molecular biomarkers hold
considerable promise of letting a person peer inside the black box.
McMichael (1994) outlined the ways in which the biomarkers can
strengthen the usual "bare bones" strategy that designates certain
work settings as high exposure and then follows exposed and
unexposed workers (prospectively or through historical cohort
designs) for the development of a specific cancer. Biomarkers can
assist with the following: (a) the measurement of internal exposure:
now, instead of all workers in the exposed work setting being
considered at high risk, there is a way of selecting only those who
give evidence that the carcinogen entered their body; (b) the
measurement of early biologic response: one can identify those who
are showing precursors to the cancer or early pathologic changes;
(c) the measurement of effect-modifying host characteristics; (d)
elaboration of biologic mechanisms in disease induction; (e) the
refinement of risk quantification; and (f) differentiation of
disease outcomes.

The psychologic tradition of studying stress as a risk factor for
disease is described by Cohen et al. (1995) as placing "emphasis on
the organism's perception and evaluation of the potential harm posed
by objective environmental experiences" (p. 6). Some of the possible
parallels between the role of such perceptions and the role of
biomarkers listed above are intriguing. Certainly, measures of
perceived threat or excessive demand are fully analogous to
measurement of internal exposure. Similarly, measures of dysphoria
or distress could be parallel to measurement of early biologic
response or perhaps also to elaboration of mechanisms in disease
induction and refinement of risk quantification. And the
identification of effect-modifying psychosocial host characteristics
represents a major theme in the psychologic tradition.

However, there are also important differences between the biomarkers
and the psychologic tradition. The biomarkers are selected to be on
the mechanistic pathway to disease occurrence. The psychological
variables, on the other hand, may or may not represent the
intermediate steps in disease etiology. Specifically, it needs to be
documented, rather than just theorized, that the effect of objective
exposures on disease is fully mediated by specific appraisals, or
that felt distress mediates between exposure and disease outcome.
Obviously, whether one is studying mental health or physical health
outcomes will matter considerably in terms of how well the
psychological markers behave like the biomarkers.

Johnson (1996) pointed out that while epidemiology has made useful
contributions to occupational health psychology, it is a limited
contribution because "it lacks a theory of human development"
(p. 7). This is felicitous phrasing because it points us in the
direction in which classical epidemiologic approaches need to be
broadened. It also reveals the enormity of the challenge of studying
work stress comprehensively.

Dimensionalizing the Work Environment and Measuring Work Stress

A reading of the four articles dealing with the four instruments
inevitably raises questions of context: How do these instruments
compare with other instruments that are already available? Do they
fill a

void in a particular area of measurement? Are they


dealing with the really important dimensions? These
questions turn out to be extremely difficult to answer.
There is no Mendeleeff-type periodic table of work
dimensions that can show whether some measures are
still in need of development or whether there are gaps in the
established measures. Nor has anyone tried Cattell's
(1957) approach to mapping the whole domain of
personality traits by developing a dictionary of
descriptive words and phrases pertaining to the work
environment, eliminating synonyms and redundancies, and coming up with a "reduced" list of separate
dimensions. Nor has anyone tried to start with some
dictionary of occupational titles and transform these
into a taxonomy of characteristics of the work
environment.
In an earlier publication (Kasl, 1991), I tried to
create categories of dimensions that are likely to have
an impact on health. I came up, however tentatively,
with 10 categories:
1. physical (hygienic) conditions at work;
2. physical aspects of work;
3. temporal aspects of work day and work itself;
4. work content;
5. interpersonal: work group;
6. interpersonal: supervision;
7. financial and economic aspects;
8. organizational aspects;
9. community and societal aspects; and
10. changes (or threatened changes) in the work setting,
including loss of work.

Of the above categories, Numbers 4 and 8 were


particularly rich in the number of dimensions
subsumed within each category.
If one takes the list of 10 categories as a skeletal
framework and then tries to ask which of the
dimensions or categories are covered by the four
measures presented in the four articles, one discovers
that this cannot be done equally well for all four. The
Karasek and Spector instruments do attempt to
measure several aspects of the work environment, but
the JSS and the PMI start out with an explicit stress
orientation that is more transactional and does not
allow a full reconstruction of what work dimensions
are specifically involved in the stressful experiences.
This is more so for the JSS than the PMI.
The JSS deals with 30 stressor events "designed to
assess generic sources of occupational stress encountered by men and women employed in a wide variety
of work settings" (Vagg & Spielberger, 1998, p. 298).

Items such as "perform tasks not in job description"


and "assigned increased responsibility" are not easily
coordinated to well-known constructs such as role
conflict, role ambiguity, and overload. However, such
items are likely to be phenomenologically more
faithful to the experience of workers, particularly if
they were based initially on open-ended narratives of
the workers' experiences, than they would be if driven
by the need to fit a particular theoretical construct.
Vagg and Spielberger (1998) emphasize the JSS's
innovative feature of measuring both the severity and
the frequency of the stressor events, similar to the
procedure used with the Social Readjustment Rating
Scale. (Of course, considerable controversy has
existed over the choice between an individual's own
weights for the life events and population-based
weights and the merits and disadvantages of each;
Kasl, 1983.) This innovative feature fits some items
better than others. For example, "inadequate salary"
hardly seems like a true event, so frequency ratings
must be a bit artificial. On the other hand, some true
events tend to vary considerably in severity (for the
same person on the same job), such as "dealing with a
crisis situation," so the severity rating may also be a
bit artificial. In addition, it is a bit puzzling and
distressing that the factor structure of the severity of
stressor events (which is a rating presumably
independent of whether or not those events apply to
the respondent) is very similar to the factor structure
of the frequency reports, because the latter are meant to
describe the respondents' actual situation. The readers will
want to know what the intercorrelations are among
the four job pressure and lack of support scales based
on severity and then again on frequency. Substantial
correlations between severity and frequency would
tend to undermine the claim that this innovation is
truly yielding new information. Finally, the reader
could use a little more help in interpreting the table of
gender differences because it is difficult to know how
much they are due to differences in the work settings
of men versus women and how much they are true
gender differences in responding to the same work
setting.
Given the origins of the JSS in measuring stress of
policemen and teachers, one may speculate that the
instrument is more appropriate for work settings that
involve a fair amount of person-to-person interaction
and less appropriate for work settings dominated by
person-to-machine interactions. There is a sizable
literature on health effects of machine-paced, physically constraining, high-vigilance jobs (e.g., Sauter,
Hurrell, & Cooper, 1989), yet one wonders how
highly stressed these workers would appear using the

JSS instrument. Similarly, how appropriate is the JSS


for the study of monotonous jobs and for distinguishing between repetitive monotony and uneventful
monotony, with the former being the more pathogenic
(Johansson, 1989)? Perhaps what is most useful here
is to suggest that for some work settings, a measure of
work stress such as the JSS will account for much of
the variance in health impact, whereas for other work
settings the focus needs to be more on measuring
specific work dimensions because these, rather than a
measure of work stress, will explain much of the
variance in health impact.
The other new measure described is the PMI
(Williams & Cooper, 1998). The approach here, as
with the JSS, is representative of the psychological
approach to the stress process (Cohen et al., 1995),
which emphasizes the perceptions of pressures. Also
like the JSS, the PMI appears more appropriate for
work settings dominated by person-to-person interactions and less appropriate for settings dominated by
person-to-machine interactions. It is ironic that
Williams and Cooper (1998, p. 307) offer a paraphrase from Pratt and Barling (1988) to the effect that
"it is as important to measure the interpretation that
individuals give to an event as it is to measure the
event itself." Yet the PMI measures the perceptions
only, not "the event itself." Table 4 of William and
Cooper's article shows the intercorrelations among
the eight sources of pressure; the median correlation
is .48, fairly high given that they were seeking to
obtain relatively independent dimensions.
For such a new instrument, the track record of its
use, as seen in Tables 6 to 13 (Williams & Cooper,
1998), is quite impressive. But it detracts from
appreciating this accomplishment when one encounters passages of aggressive marketing. For example,
Williams and Cooper (1998) acknowledge "frustration with the inability of existing instruments [before
the Occupational Stress Indicator (OSI) and PMI] to
provide a comprehensive, integrated assessment of
occupational stress that could be used to provide
practical help to individuals and their organizations"
(p. 307). In the first paragraph in the conclusion
section ("Using the PMI"), they return to the theme
that the PMI will provide standardization in both the
conceptualization and measurement of occupational
stress that has not existed before. I believe the real
culprit is not the absence of measures, but that in the
stress field there are major, fundamental differences in
orientation and formulation of central concepts and of
the etiological process. Thus far, no single measure
and no single formulation have succeeded in dominating
the field. In particular, there is simply not enough
empirical evidence on health impact of the work
environment to embrace completely the psychological perspective and discard the others.
In Williams and Cooper's (1998) Figure 1, the
authors offer a "model" of the occupational stress
process in terms of sources of pressure and effects,
with individual differences (presumably) representing
a set of moderators. This organization of variables
appears a bit hasty and arbitrary, and one cannot help
but wonder whether the psychological emphasis on
perceptions does not sometimes create an ambiguity
about what is the independent and what is the
dependent variable. For example, organizational
security is classified as an effect, but most researchers
looking at job loss and job insecurity would consider
it a (perceived) environmental characteristic. The authors also call resilience an effect, but in most
formulations it is treated as a moderator. On the other
hand, the source of pressure called Home/Work
Balance is viewed by many, using the term spillover
effect, as an outcome. (I acknowledge that without
seeing the actual items, I cannot be sure how
appropriate my comment here is.) It is somewhat
ambiguous whether Daily Hassles, referring as it does
to "day to day irritants and aggravations in the
workplace," is an independent or a dependent
variable. The criticism by Dohrenwend, Dohrenwend,
Dodson, and Shrout (1984) regarding the Lazarus
Daily Hassles scale is a good illustration of the
difficulty of interpreting this variable properly.
The other two measures, Spector and Jex's (1998)
and Karasek et al.'s (1998), are more established
instruments. Furthermore, they are more in the
tradition of measuring, through self-reports, aspects
of the work setting or organizational environment and
do not explicitly intend to reflect the transactional
stress process.
Spector and Jex (1998) offer a wealth of cross-sectional correlational information, showing the
associations of the three job stressor scales with other
measures of job stressors and several strain indicators. Yet it is difficult to process this information
about the mix of high, moderate, and low intercorrelations and translate it into some conclusion about
where the instrument is performing well and in what
other areas it needs improvement. It is also difficult to
answer, on the basis of this rich array of correlational
information, such questions as: How well would these
three scales predict disease outcomes? Are the three
scales meant to cover the work environment domain
comprehensively, or are they only trying to fill the
gaps left by other instruments? When should one


prefer the Spector instrument over the many other


measures available to investigators?
The JCQ of Karasek et al. (1998) comes closest to
the environmental stress tradition that Cohen et al.
(1995) described. Karasek is the most conscientious
in considering the issue of objective measurement of
work dimensions; in most instances, he uses occupational titles as a proxy for such objective measurement. The JCQ also has the most extensive accumulated evidence on relationships to physical health,
primarily cardiovascular disease and cardiovascular
risk (e.g., Kristensen, 1996). The Karasek et al. article
itself deals with the use of the JCQ across countries
and in different language versions. This cross-sectional evidence is certainly impressive; it shows
comparable means, reliabilities, and scale intercorrelations and does support the notion that major
national differences in psychosocial job characteristics do not exist. (It is also a testimony to the careful
construction of the various language versions of the
JCQ.) At the same time, it is slightly disconcerting
that the presentation of the international data is not
accompanied by any hypotheses that are being tested.
Evidence is more compelling when differential
predictions are made and differences obtained than
when there is only a generalized expectation of
similar findings. For example, it would have been
interesting to see among the samples data from a
country for which one would be expecting striking
differences in some of the dimensions, or in the
correlations among the dimensions. Possibly the most
compelling evidence will come in the future when
one has information on how these scales predict the
various health outcomes in these different countries.
The juxtaposition of these four articles may have
the effect on some readers of wishing to see studies
using all four instruments in order to learn more about
relationships across instruments; it certainly had that
effect on me. For example, how does Spector's
organizational constraints scale relate to Karasek's
decision latitude dimension, and does this vary across
jobs? Some blue-collar jobs are designed with low
decision latitude to facilitate and enhance performance and productivity (however inhumanely this
may be carried out); thus they may be relatively low
on organizational constraints. Some managerial jobs
with high decision latitude, on the other hand, may
exhibit a wide range of levels of organizational
constraints. This might suggest that, in those jobs,
including the Spector scale with the JCQ instrument
might improve prediction of health outcomes.

A Note on the Measurement of Job Strains


In the highly influential ISR theoretical model
(e.g., Caplan, Cobb, French, Van Harrison, &
Pinneau, 1975), the term strain was used to designate
psychological, physiological, and behavioral reactions and to distinguish strains from presumed, more
distal health-illness outcomes. Of the four articles in
this issue describing measures of work dimensions or
work stress, two (the JSS and the JCQ) do not include
any measures of strains with their instruments.
Spector and Jex (1998) include one measure,
physical symptoms, and the PMI (Williams &
Cooper, 1998) packages with its measurement of
eight sources of pressure nine measures of "effects"
as well as seven measures of "individual differences."
It is not clear to me why instruments for measuring
a set of environmental exposure variables should
include also measures of some possible outcomes or
effects. For one, these outcomes are bound to be only
a small subset of the various outcomes that can be
studied in connection with the exposure variables,
and thus the selection of a particular set of strain
measures to go with the stressor measures is likely to
limit the horizons of future investigators using these
instruments. Moreover, although these measures of
work dimensions and job stressors have a reasonable
claim on representing the state of the art in the field, such
a claim about their strain measures is less compelling.
This is partly because not as much careful work went
into choosing them or developing them, and partly
because there is a felt constraint on length of the total
instrument. An investigator who wishes to measure
job satisfaction, depression, or physical symptoms
should scour the appropriate literature and choose the
best possible measure that fits his or her study needs.
In the long run, it is preferable not to have separate
bodies of literature, for example, for community
psychology, family relations psychology, and organizational psychology, all dealing with similar mental
health and well-being concepts but using different
sets of measures.
However, there appears to be another issue of
considerable importance hidden behind the above
comments about including versus excluding strain
measures in the total package of instruments. Should
one measure these strains differently when studying
the impact of work than when studying, say, the
impact of interpersonal relations? Is one trying to
measure physical symptoms or symptoms of depression and anxiety that are due to the work setting only
and exclude other contributors to these symptoms?
The highly influential work of the University of

Michigan investigators (Caplan et al., 1975) did not set a clear
precedent. In the major National Institute for Occupational Safety
and Health study, Job Demands and Worker Health, the introductory
statement for the scales measuring depression, anxiety, and
irritation reads: "Here are some items about how people may feel.
When you think about yourself and your job nowadays, how much of the
time do you feel this way?" (Caplan et al., 1975, p. 274). There is
a very gentle nudge for the respondent to connect the symptom
reporting to the job, but of course one does not really know what
the effect of such instruction is. For the somatic complaints scale,
the instruction is, "Have you experienced any of the following
during the past month on the job?" The directive is strong and
appears clear enough, but it is difficult to see how well the
respondents can actually comply. The list of symptoms is
traditional, including "trouble sleeping at night" or "loss of
appetite," symptoms that could be a consequence of work stress but
would not manifest themselves necessarily "on the job."

With physiological measures, such as blood pressure or stress
hormones, there is no issue of changing operational procedures. What
is at issue is how to time the set of measurements and to choose the
settings in which they are carried out, so that they reveal the
specific effect of the job setting. The "simple" strategy of
measuring, for example, blood pressure at work and then again at
home after work or in the morning before work begins, is not really
that simple given issues such as rate of "unwinding" and
anticipatory arousal. Furthermore, as noted by Hurrell et al.
(1998), different indicators have different rates at which they
respond, remain elevated, and return to normal, so that
physiological indicators have to be carefully selected and the
scheduling of measurements has to be carefully considered.
Particularly problematic is detecting effects of prolonged stressors
if many biological systems show exhaustion or biological adaptation.

For symptom checklists and other psychological and behavioral
indicators, the strategy of administering the same instruments in
different settings and on different occasions is less useful. True,
for specific issues, such as respiratory symptoms in studies of sick
building syndrome (Bourbeau, Brisson, & Allaire, 1997) or studies of
musculoskeletal disorders in relation to the use of video display
terminals (Moon & Sauter, 1996), the strategy still works reasonably
well (provided one is not studying individuals with irreversible
musculoskeletal problems); even directly asking how symptoms change
as the individual leaves one setting and goes to another seems a
reasonable procedure for these issues. But for symptoms of
depression, anxiety, or low self-esteem, one has generally assumed
that the symptoms are not so evanescent and that therefore a mere
switching of the setting in which the same instrument is
administered will not yield useful information. And behavioral
indicators, such as smoking or alcohol consumption, because of
various environmental constraints, are especially unsuitable for
inferring the impact of work from the differential frequency with
which they are engaged in at work and then in some other setting
later.

It would seem that what is left are two strategies: (a) to ask the
respondent to report only on symptoms that he or she attributes to
the work setting or (b) to ask simply about the presence of symptoms
(with no hints of attribution) and design the study and analyze the
data so that the impact of the work setting can be detected and
isolated. The latter strategy seems to be the traditional approach
and is clearly preferable; thus one should be using generic
instruments, such as the Center for Epidemiological Studies
Depression Scale for measuring depression (Radloff & Locke, 1986),
which are not targeted for specific settings. The former strategy
appears particularly undesirable if one is working within the
psychological tradition of subjective perceptions of the work
setting, because the same attributional processes that lead to the
perceptions and to the symptom reporting will be involved. It must
be acknowledged, however, that the necessary methodological studies
are lacking for one to know enough about the possibly biasing
influences of the attribution dynamics, the direction of the bias,
and the specific characteristics of individuals that affect the
degree and the direction of the bias. And, of course, if one is
studying the phenomenology of the work stress experience, most of
these concerns over methods become irrelevant. However, the study of
phenomenology is not a study of health effects.

Varieties of Trivial Associations

In several of the articles submitted to this special section, the
authors refer to some of my previous publications on this general
topic (e.g., Kasl, 1978, 1987, 1989, 1991). Specifically, I appear
to be strongly linked to two related criticisms that I have stated
(and apparently overstated, if most of my colleagues are correct) in
the past: (a) Given certain design features of work stress studies,
a fair proportion of this corpus of research is reporting

rather trivial (usually cross-sectional) findings, and


(b) there should be a clear preference for measuring
dimensions of work "objectively." I will deal with the
first point more briefly because the second one is the
more important one, as well as being an important
"solution" to the first problem.
There are some obvious issues. One is the relation
of a new study to the accumulated evidence from
previous studies. For example, there is no need for
another report of a cross-sectional association (in the
moderate range) between some measure of job stress
and a measure of job satisfaction. What is needed are
tests of more precise hypotheses to advance the field,
for example, on what jobs and with what types of job
stress are the associations particularly high or
particularly low. Another is the situation when a
cross-sectional design cannot rule out reverse causation, for example, the effects of presence of disease on
the putative risk factor; this is rather trivial if one is
trying to pin down the role of the risk factor.
However, cross-sectional associations that are ambiguous with respect to direction of effect need not be
trivial if the association, no matter how interpreted, is
of some interest.
Other aspects of triviality may be subject to more
disagreement. One is when an investigator chooses
two study variables as the independent and dependent
ones (conceptually at least, not necessarily in the
study design sense), whereas a critic argues that the
two variables reflect the same underlying construct
and that the association is no more than a reflection of
what happens when two indicators of the same broad
construct are correlated with each other. For example,
the Burnout Measure (BM; Pines & Aronson, 1988),
which reflects exhaustion (physical, mental, and
emotional), may be used to "predict" depression
when in fact the BM can be seen as a major
component of depression. Or, the Daily Hassles
measure of the PMI (the day-to-day irritants and
aggravations in the workplace) can be used as a
predictor of Physical Symptoms (how calm a person
feels), but some may argue that the constructs are
very similar.
Another aspect of potential triviality is similar to
the notion of conceptual overlap but might be called
causal overlap: Researchers divide the causal development process into trivially small steps in trying to
predict the transition from one step to the next or fail
to identify the important transitions that are worthy of
attention. For example, in a work situation of high
insecurity, some may perceive it as a timely
opportunity to go back to school and acquire more
training, whereas others may perceive it as a major

threat to their livelihood and to their ability to be an


adequate breadwinner. If one decides to study the
relation of these differential perceptions to such
outcomes as depression and alcohol consumption,
then one has chosen a rather trivial step. The
important issue is how some people come to perceive
the insecure work situation one way or the other. The
emphasis on perceptions, on coping, and on reappraisals in the stress process may make it difficult to
identify the important linkages or transitions one
should be studying.
Perhaps the most controversial area of potential
triviality is the notion that shared method variance, or
shared response biases, in the assessment of both the
independent and the dependent variables (the exposures and the outcomes) creates spurious or inflated
associations. Negative Affectivity (NA; Watson &
Clark, 1984) at present is the most frequently invoked
concept around which the concerns over confounded
or inflated estimates center. Both the dispassionate
analysis of the issue (Hurrell et al., 1998) and the
passionate dismissal of the solutions to this problem
as "triviality turnaround" (Karasek et al., 1998)
reveal that neither the issue nor the evidence is that
straightforward. One may recall earlier days in
assessment psychology when acquiescence and social
desirability were the concepts that held center stage.
In retrospect, one might conclude that the issue of
social desirability was overstated by paying attention,
for example, to the magnitude of negative (ecological) correlations between population symptom frequency and the population ratings of social undesirability of the symptoms. And the field never managed
to develop satisfactory independent measures of the
presumed response tendencies of acquiescence and
social desirability. So one is on the side of history if
one argues that the statistical adjustments for NA may
not be the right solution. But there is an issue here,
nevertheless.
It can be reasonably argued (Kasl & Rapp, 1991)
that there is as yet no adequate direct measure of NA
as a responding tendency. The existing measures
could reflect a mix of (a) this responding tendency, (b)
a stable trait such as neuroticism reflecting vulnerability to negative affect during life's daily challenges, (c)
a stable trait indicating a stable level of negative
affect actually being present (not a vulnerability), and
(d) a state level of negative affect. What is not clear is
how the concerns over potential confounding or
inflated estimates of associations are altered when one
concludes that the measure does not reflect a
responding tendency but indicates, for example, a
stable negative affect instead. There is still concern

that this stable trait may influence the assessment dynamics
involving perceptions of the work environment and various outcomes.

Beyond this influence on assessment, one wishes to identify the
proper directional influences among the variables of interest. For
this, one needs stronger designs, such as prospective follow-up of
workers in a "natural experiment," and additional sources of data,
such as studying contrasting jobs rated by others to be clearly
different on selected work dimensions (this being a proxy for
"objective" measures of the work setting). For example, to
demonstrate that a measure of trait anxiety indicates greater
vulnerability but only under conditions of high work stress (rather
than a pervasive influence on perceptions of the work environment
and on dysphoria), one would probably need contrasting job settings
that are not based on the respondents' perceptions.

The Need for "Objective" Measurements of Exposure

It is unfortunate that the debate over objective versus subjective
strategies of measurement is carried out with terms that are quite
imprecise and unnecessarily polarizing. Particularly unfortunate is
the tendency of some to equate subjective with "based on
self-report" and objective with "not using self-report." This hides
many of the subtleties of the issues involved. Formulations of the
distinction between objective and subjective, such as that of Frese
and Zapf (1988), are far removed from such simplistic, facile
equivalence.

The arguments against the use of objective measures seem to fall
into two categories. One is the psychological approach to stress
(Cohen et al., 1995), which argues that the perceptions of the
environmental condition are the relevant independent variables or
exposures for study. A strong version of this position would argue
that the causal pathway to health outcome works only through these
perceptions and that one does not need to concern oneself with the
objective conditions because, though they may be antecedent to the
perceptions, they are unnecessary to an adequate study of the
process. However, such a strong version is seldom explicitly
enunciated. The other category of arguments against objective
measures is a pragmatic one: Objective measures of job
characteristics are difficult to identify and obtain, and are
expensive, time consuming, and not really worth the trouble. A pure
version of this argument is also seldom enunciated but is usually
supplemented by arguments from the psychological perspective. One
should note that because this argument is really about the
convenience of self-report measures, which are not to be
simplistically equated with subjective measurement, the argument is
thus only an oblique rejection of the need for objective measurement
strategies.

I find it helpful to look to other domains of the social-behavioral
sciences and find parallels to this debate in occupational health
psychology. In studies of the residential environment, particularly
those dealing with the concept of crowding, there are similar calls
to anchor one's subjective variables to objective situational
dimensions (e.g., Archea, 1977; Taylor, 1980). One author (Wohlwill,
1973) stated the criticism succinctly in the title of his chapter,
"The environment is not in the head." With respect to crowding, one
would want some kind of an operationalization of the relationship
between number of people and the amount of space, and then want to
deal with some of the psychological aspects, such as excess of
social stimuli (demands), plus inability to control when one
receives the demands and how one should respond. But when there is
no anchoring to environmental conditions, the concept begins to
drift, such as when Stokols (1976) concluded that fighting in
marriage is a good example of crowding. There is also the triviality
danger, as in the following example from a major study of crowding
(Gove, Hughes, & Galle, 1979): one of two measures of crowding was
called felt demands (people making demands, being interrupted
frequently, etc.), whereas one of the outcomes was the scale
"children are a hassle" (children get in the way, children make too
many demands).

Thus, the literature on residential environment and crowding
demonstrates, as does the research on work stress and health, that
in interdisciplinary domains, there is a need to accommodate the
potentially conflicting environmental and psychological
perspectives. However, there is also a need to be sensitive to the
possibility that in the study of some research questions, one or
another perspective should be the dominant one. For example, in a
set of studies of crowding in a college dormitory setting (Aiello,
Baum, & Gormley, 1981; Baum, Shapiro, Murrey, & Wideman, 1979;
Reddy, Baum, Fleming, & Aiello, 1981), the authors switched from an
environmental to a psychological (interpersonal) perspective when
they realized that the number of roommates per unit was not the
issue; rather, it was whether the number of roommates was odd or
even, because odd numbers (three or five) promoted coalition
formation and created more adverse living circumstances.

Developing measurement strategies for the objective work environment is, indeed, a formidable task
(e.g., Hacker, 1993), and studies that have nationally
representative samples of employed individuals, or
even just include a large number of occupations in
their study, simply cannot commit to such an
approach. However, in studies of single occupations,
such as bus drivers (Greiner, Ragland, Krause, Syme,
& Fisher, 1997), developing relevant objective
dimensions need not be a daunting task. Also, when
one is working with such outcomes as cumulative
trauma disorders (Moon & Sauter, 1996), the
ergonomic perspective on man-machine interactions
needs to be fully represented.
There have been many published studies over the
years in which authors had at least one objective
measure (the job title or job classification) but
failed to use it in their analysis and combined data
across jobs when examining stressors and strains.
Paying attention to job titles is a minimal strategy of
utilizing objective data and is often quite informative.
In the research on the job strain model using the JCQ,
two of the three primary dimensions are decision
latitude and (psychological) job demands. As described by Karasek and Theorell (1990), differences
between occupations explain about 35% of the
variance in latitude but only about 4% in demands
(the Karasek et al., 1998, article gives slightly
different estimates). Several questions might be
asked:
1. Are occupational titles too crude a classification
to pick up variation in job demands, or are job
demands a subjective reaction almost uncorrelated
with objective work conditions, so that a more refined
grouping of jobs would not increase the explained
variance? More of the variance is explained if age of
workers is taken into account as well, but this is a
person characteristic, not an additional job characteristic. It is clear that one needs to explore ways of
supplementing job titles with additional information
about the jobs to see if more variance could be
explained.
2. Does job demands measure psychological
reactions to objective work conditions, albeit with
enormous individual differences, or does it measure
mostly preexisting personal characteristics that would
manifest themselves in a similar way on different
jobs? It would seem that additional information is
needed on objective job characteristics to decide
which it is.
3. Is the health impact of the work setting most
appropriately understood as a link to the objective
work conditions, to the subjective measures, or to

some unique combination of the setting and the


reactions? If job categories are not related to job
demands and if job categories are related to coronary
heart disease (CHD), then job demands are not likely
to have a mediating role, though they could be an
independent risk factor. Similarly, if job categories
are not related to job demands and demands are
related to CHD, then jobs are not likely to be
antecedent though they could be an independent risk
factor.
4. How does one know how to develop ameliorative intervention strategies? Does one try to change
the work conditions (but which ones?), the reactions
of the workers, or some specific reactions of workers
in specific settings?
It may often be necessary to pay attention to the job
title in interpreting the meaning of the perceptual
measures. For example, the item "Is your job
hectic?" may have different meanings; when asked of
blue-collar workers in various assembly-line and
machine-paced jobs, it may reflect a specific aspect of
pacing, plus some elements of quality control and
allowances for taking breaks. However, if one is
studying a group of occupations that include also
managers, teachers, farmers, and doctors and does not
separate the specific occupations, then the meaning of
high-low scores across such a mix of occupations
may be hard to interpret. In fact, the only common
element the measure may have could be the effect of
preexisting personal characteristics, something that
should be avoided.
As another example, let us consider measures of
role ambiguity. The items deal with clarity of job
responsibilities and work objectives, as well as with
clarity and predictability of the expectations others
have about the respondent. The curious fact is that
administrators (a group for whom the concept was
practically invented) are quite a bit lower than such
uncomplicated blue-collar occupations as fork-lift
drivers and machine tenders (Caplan et al., 1975).
How could that be? It is possible that in developing
scales for "all" occupations, one makes the items
relatively general and not anchored to specific and
concrete work conditions. Thus, respondents may be
thinking of different aspects of work (but relevant for
their job) when trying to answer the items, and
combining data across many jobs may mix in the
different meanings of the scales.
The various considerations in the "subjective"
versus "objective" dilemma may be summarized
below.
Arguments in favor of objective measurement
strategies may include the following:


1. We will have a clearer linkage to the "actual"

of Psychosocial JCQs, in which Karasek raises the

environmental conditions and will know better what

disturbing possibility that methodological criticisms

aspects of the environment needs changing.


2. We will have a clearer picture of the etiological

the attacks on subjective measures and measurement

process, because it is less clear what all the influences

confounding, give "comfort to the enemy." Such

are on the subjective measures.


3. There will be less measurement confounding
when outcomes are psychological and behavioral.

criticism, he indicates, is a worker-blaming stance

4. There will be a clearer separation of where the

work environments" (p. 350). My first reaction was:

independent variable ends and the dependent variable


begins, whereas with subjective measures we are

found a way to get people like me to shut up."

already somewhere along the trajectory of reactions

Unfortunately, I have had additional reactions since

and impact.
Arguments in favor of a subjective measurement

the initial one.

strategy may include the following:


1. The "meaning" of exposure varies substantially

point better. If Karasek is right that the methodologi-

across individuals.
2. Cognitive and emotional processing moderates

ately applied, then this debate can be carried out

the overall etiological process and the subjective


exposure clarifies the etiological mechanism.
3. Environmental manipulation is not possible,
only differential reactivity of subjects can be addressed.

of the work stress and health literature, particularly

that "may be having the unfortunate effect of taking


the heat off the world's business leaders to humanize
"What a devilishly clever argument. Finally, Bob has

One reaction is that I would like to understand his


cal criticisms have been exaggerated or inappropriwithin the parameters of what is good and feasible
science in the field setting; he does not need the heavy
artillery of invoking the danger of "giving comfort to
the enemy." I find it paradoxical that while Karasek is
one of the most conscientious investigators when it
comes to confronting this issue and dealing with it in

4. Objective measures are hopelessly trivial or are


outside of any possible causal chain (in Lewin's
terminology, not part of life space but in the foreign
hull).
Pragmatic considerations have also been part of
this debate. Fundamentally, self-report measures tend
to be more easily available, cheaper, and more
convenient. Objective measures, such as for the work
environment, can be expensive, clumsy, and difficult
to obtain. Given the greater convenience of selfreports, on the one hand, and the reluctance to rely
exclusively on subjective strategies, on the other
hand, one must find strategies for collecting information from respondents while minimizing cognitive
and emotional processing. For example, many of the
scales ask respondents not only about whether some

his research, he is also the one most distressed by the


criticisms. However, if Karasek's point is that we
should not "wash our dirty linen in public," then I
feel that I cannot accept his strategic recommendation. If the evidence has limitations, the "enemy" is
sure to find out about it. I am reminded of the fact that
the most sophisticated criticism of epidemiologic
methods and of evidence for causation has come from
the tobacco industry (and often from the industry
lawyers rather than the hired scientists).
My second reaction is that the field has always had
striking divergence of perceptions regarding work
stress, and it is extremely unlikely that it is based on
differences

in methodological

evaluations of the

evidence. As Singer, Neale, Schwartz, and Schwartz

environmental conditions exist but also if they were

(1986) have shown in their study, labor organizations

bothered or distressed by them, all in the same item.

and corporations define stress quite differently, with

This brings in too much of the emotional processing.

the former adopting an environmental approach and

Similarly, one can get at the meaning of an event,


such as job loss, by asking what it meant and how the

characterizing stressors" (p. 179). Enlightened coun-

person reacted. Or, better, one can formulate ideas


about differential vulnerability, such as stage of life

the work setting, whereas conservative countries like

cycle, dependents at home, or other wage earners, and


create the "meaning" of the event out of a

the United States offer, at best, stress management


programs. Yet it is die same evidence out there for
them to see and evaluate.

combination of relatively objective characteristics.

the latter having a strong

"tendency

to avoid

tries like Sweden and Finland attempt to humanize

My third reaction is one of surprise that sometimes

Conclusion

an issue can be seen so differently even if the

At the end of Karasek et al.'s (1998) article, there is


a section called Implication for Broad Interpretability

One of the major reasons for arguing for the

individuals do not differ in their fundamental values.


importance of measuring some aspect of the "objec-

400

KASL

tive" work environment has been precisely that


investigators will be in so much better a position of
knowing what aspect of the work setting needs
changing. I see the validity of Karasek's concern, but
I am not sure that it belongs in these ongoing debates
about work stress methodology.


References
Aiello, J. R., Baum, A., & Gormley, F. P. (1981). Social
determinants of residential crowding stress. Personality
and Social Psychology Bulletin, 7, 643-649.
Archea, J. (1977). The place of architectural factors in
behavioral theories of privacy. Journal of Social Issues,
33, 116-137.
Baum, A., Shapiro, A., Murrey, D., & Wideman, M. V.
(1979). Interpersonal mediation of perceived crowding
and control in residential dyads and triads. Journal of
Applied Social Psychology, 9, 491-507.
Bourbeau, J., Brisson, C., & Allaire, S. (1997). Prevalence of
the sick building syndrome symptoms in office workers
before and six months and three years after being exposed
to a building with an improved ventilation system.
Occupational and Environmental Medicine, 54, 49-53.
Caplan, R. D., Cobb, S., French, J. R. P., Jr., Van Harrison,
R., & Pinneau, S. R., Jr. (1975). Job demands and worker
health (HEW Publication No. [NIOSH] 75-160). Washington, DC: U.S. Government Printing Office.
Cattell, R. B. (1957). Personality and motivation structure
and measurement. Yonkers, NY: World Book.
Cohen, S., Kessler, R. C., & Gordon, L. U. (1995). Strategies
for measuring stress in studies of psychiatric and physical
disorders. In S. Cohen, R. C. Kessler, & L. U. Gordon
(Eds.), Measuring stress (pp. 3-26). New York: Oxford
University Press.
Dohrenwend, B. S., Dohrenwend, B. P., Dodson, M., &
Shrout, P. E. (1984). Symptoms, hassles, social supports,
and life events: Problem of confounded measures.
Journal of Abnormal Psychology, 93, 222-230.
Frese, M., & Zapf, D. (1988). Methodological issues in the
study of work stress: Objective vs. subjective measurement of work stress and the question of longitudinal
studies. In C. L. Cooper & R. Payne (Eds.), Causes,
coping, and consequences of stress at work (pp.
375-411). Chichester, England: Wiley.
Gove, W. R., Hughes, M., & Galle, O. R. (1979).
Overcrowding in the house: An empirical investigation of
its possible pathological consequences. American Sociological Review, 44, 59-80.
Greiner, B. A., Ragland, D. R., Krause, N., Syme, S. L., &
Fisher, J. M. (1997). Objective measurement of occupational stress factors: An example with San Francisco
urban transit operators. Journal of Occupational Health
Psychology, 2, 325-342.
Hacker, W. (1993). Objective work environment: Analysis
and evaluation of objective work characteristics. In A
healthier work environment: Basic concepts and methods
of measurement (pp. 42-57). Copenhagen: WHO Regional Office for Europe.
Hurrell, J. J., Jr., Nelson, D. L., & Simmons, B. L. (1998).
Measuring job stressors and strains: Where we have been,
where we are, and where we need to go. Journal of
Occupational Health Psychology, 3, 368-389.

Johansson, G. (1989). Job demands and stress reactions in
repetitive and uneventful monotony at work. International Journal of
Health Services, 19, 365-377.
Johnson, J. V. (1996). Conceptual and methodological
developments in occupational stress research: An introduction to state-of-the-art reviews: I. Journal of Occupational
Health Psychology, 1, 6-8.
Karasek, R., Brisson, C., Kawakami, N., Houtman, I.,
Bongers, P., & Amick, B. (1998). The Job Content
Questionnaire (JCQ): An instrument for internationally
comparative assessments of psychosocial job characteristics. Journal of Occupational Health Psychology, 3,
322-355.
Karasek, R., & Theorell, T. (1990). Healthy work. New
York: Basic Books.
Kasl, S. V. (1978). Epidemiological contributions to the
study of work stress. In C. L. Cooper & R. L. Payne
(Eds.), Stress at work (pp. 3-38). Chichester, England:
Wiley.
Kasl, S. V. (1983). Pursuing the link between stressful life
experiences and disease: A time for reappraisal. In C. L.
Cooper (Ed.), Stress research: Issues for the eighties (pp.
79-102). Chichester, England: Wiley.
Kasl, S. V. (1987). Methodologies in stress and health: Past
difficulties, present dilemmas, future directions. In S. V.
Kasl & C. L. Cooper (Eds.), Stress and health: Issues in
research methodology (pp. 307-318). Chichester, England: Wiley.
Kasl, S. V. (1989). An epidemiological perspective on the
role of control in health. In S. L. Sauter, J. J. Hurrell, Jr.,
& C. L. Cooper (Eds.), Job control and worker health (pp.
161-189). Chichester, England: Wiley.
Kasl, S. V. (1991). Assessing health risks in the work setting.
In H. E. Schroeder (Ed.), New directions in health
psychology assessment (pp. 95-125). New York: Hemisphere.
Kasl, S. V., & Rapp, S. (1991). Stress, health, and
well-being: The role of individual differences. In C. L.
Cooper & R. Payne (Eds.), Personality and stress:
Individual differences in the stress process (pp. 390-401).
Chichester, England: Wiley.
Kristensen, T. S. (1996). Job stress and cardiovascular
disease: A theoretic critical review. Journal of Occupational Health Psychology, 1, 246-260.
McMichael, A. J. (1994). Invited commentary: "Molecular
epidemiology": New pathway or new traveling companion? American Journal of Epidemiology, 140, 1-11.
Moon, S. D., & Sauter, S. L. (Eds.). (1996). Beyond
biomechanics: Psychological aspects of musculoskeletal
disorders in office work. London: Taylor & Francis.
Pines, A. M., & Aronson, E. (1988). Career burnout: Causes
and cures (2nd ed.). New York: Free Press.
Pratt, L. I., & Barling, J. (1988). Differentiating between daily
events, acute and chronic stressors: A framework and its
implications. In J. J. Hurrell, Jr., L. R. Murphy, S. L.
Sauter, & C. L. Cooper (Eds.), Occupational stress:
Issues and developments in research (pp. 41-53). New
York: Taylor & Francis.
Radloff, L. S., & Locke, B. Z. (1986). The community
mental health assessment survey and the CES-D Scale. In
M. M. Weissman, J. K. Myers, & C. E. Ross (Eds.),
Community surveys of psychiatric disorders (pp. 177-189). New Brunswick, NJ: Rutgers University Press.
Reddy, D. M., Baum, A., Fleming, R., & Aiello, J. R. (1981).
Mediation of social density by coalition formation.
Journal of Applied Social Psychology, 11, 529-537.
Sauter, S. L., Hurrell, J. J., Jr., & Cooper, C. L. (Eds.).
(1989). Job control and worker health. Chichester,
England: Wiley.
Singer, J. A., Neale, M. S., Schwartz, G. E., & Schwartz, J.
(1986). Conflicting perspectives on stress reduction in
occupational settings: A systems approach to their
resolution. In M. F. Cataldo & T. J. Coates (Eds.), Health
and industry (pp. 162-192). New York: Wiley.
Spector, P. E., & Jex, S. M. (1998). Development of four
self-report measures of job stressors and strain: Interpersonal Conflict at Work Scale, Organizational Constraints
Scales, Quantitative Workload Inventory, and Physical
Symptoms Inventory. Journal of Occupational Health
Psychology, 3, 356-367.
Stokols, D. (1976). The experience of crowding in primary
and secondary environments. Environment and Behavior,
8, 49-86.
Taylor, R. B. (1980). Conceptual dimensions of crowding
reconsidered. Population and Environment, 3, 298-308.

Vagg, P. R., & Spielberger, C. D. (1998). Occupational
stress: Measuring job pressure and organizational support
in the workplace. Journal of Occupational Health
Psychology, 3, 294-305.
Watson, D., & Clark, L. A. (1984). Negative affectivity: The
disposition to experience aversive emotional states.
Psychological Bulletin, 96, 465-490.
Williams, S., & Cooper, C. L. (1998). Measuring occupational stress: The development of the Pressure Management Indicator. Journal of Occupational Health Psychology, 3, 306-321.
Wohlwill, J. F. (1973). The environment is not in the head. In
W. F. E. Preiser (Ed.), Environmental design research:
Vol. 2, Symposia and workshops (pp. 166-181).
Stroudsburg, PA: Dowden, Hutchinson & Ross.

Received August 3, 1998

Revision received August 10, 1998


Accepted August 10, 1998
