
MANAGEMENT SCIENCE Vol. 29, No. 5, May 1983. Printed in U.S.A.

DEVELOPMENT OF A TOOL FOR MEASURING AND ANALYZING COMPUTER USER SATISFACTION*


JAMES E. BAILEY†
AND

SAMMY W. PEARSON‡

This paper reports on a technique for measuring and analyzing computer user satisfaction. Starting with the literature and using the critical incident interview technique, 39 factors affecting satisfaction were identified. Adapting the semantic differential scaling technique, a questionnaire for measuring satisfaction was then created. Finally, the instrument was pilot tested to prove its validity and reliability. The results of this effort and suggested uses of the questionnaire are reported here. (COMPUTERS-SYSTEMS DESIGN/OPERATION; INFORMATION SYSTEMS, MANAGEMENT; UTILITY/PERFORMANCE, MULTI-ATTRIBUTE)

1. Introduction

Measuring and analyzing computer user satisfaction is motivated by management's desire to improve the productivity of information systems. It is well recognized that productivity in computer services means both efficiently supplied and effectively utilized data processing outputs ([14], [22]). Further, it is argued that utilization is directly connected to the user community's sense of satisfaction with those services.

This connection is based on results reported by a variety of researchers. In their "Behavioral Theory of the Firm," Cyert and March [34] argue that the daily environment of the organization continually imposes upon managers the need for information. If a formal information system exists, its success at meeting those needs either reinforces or frustrates the user's sense of satisfaction with that source. Evans [36] suggests that a lower limit to satisfaction exists below which the user will cease all interaction with the system and seek alternative sources. Swanson [29] empirically found high correlation in a query environment between the user's appreciation for the system and his utilization of its outputs. Powers and Dickson [24] concluded that user satisfaction is the most critical criterion in measuring computer system success and failure. Lucas [37] has shown a weak relationship between the economic performance of sales personnel and their information system utilization. Neumann and Segev [38] show low correlation between bank branch users' reaction to satisfaction factors and their organization's performance.

There was no standard measure of satisfaction in these studies, and exogenous variables were poorly controlled. Nonetheless, it has been argued that user satisfaction is correlated to information system utilization and systems success. An accepted measure of user satisfaction is clearly needed. This paper reports on the development and evaluation of a questionnaire designed to measure computer user satisfaction.

The literature contains a number of attempts at measuring user satisfaction. In each case the users were asked to evaluate their computer services relative to a sense of satisfaction. Noland and Seward [39] gave questionnaires to several users of specific reports. On a five point scale they asked the users to rate their satisfaction with the report as a whole. No attempt was made to ascertain why the report received a given rating. Neumann and Segev [38] asked users to respond to a similar satisfaction question using four factors: accuracy, content, frequency and recency.
* Accepted by Charles H. Kriebel; received May 11, 1981. This paper has been with the authors 5 months for 2 revisions.
†Arizona State University, Tempe.
‡CACI, Inc.-Federal, Arlington, Virginia.



Copyright © 1983, The Institute of Management Sciences



The question as to why a factor was or was not satisfactory was not asked. For example, was the report's content unsatisfactory because it was insufficient in detail or because it was irrelevant to the need? Debons et al. [35] developed a list of 10 items affecting satisfaction: accuracy, reliability, timeliness, assistance, adequacy, accommodation, communication, access, cost and environment. The users were then asked to evaluate each item on a five point scale from very unsatisfactory to very satisfactory. Once again, no qualifiers were used to indicate why an attribute was unsatisfactory. Finally, Swanson [29] measured a surrogate for satisfaction which he called appreciation. He defined this construct as a "manifold of beliefs about the relative value of the MIS as a means of inquiry." His operational definition consisted of 16 factors such as timeliness and adequacy. Responses were on a five point scale using modifiers such as very, somewhat and neither. No evidence as to the completeness of his list of factors was given. The research reported here, for example, identified "flexibility," or "easy to change and adapt," as the most important of all factors; Swanson's list did not contain this factor.

The clear need is for a definition of satisfaction which contains a complete and valid set of factors and an instrument which measures not only the user's reaction to each factor but why the respondent reacted as he did.

2. Definition of User Satisfaction

While seeking a model of computer user satisfaction, it was natural to turn to the efforts of psychologists who study satisfaction in its larger sense ([3], [7], [26], [27]). The literature generally agrees that satisfaction in a given situation is the sum of one's feelings or attitudes toward a variety of factors affecting that situation. Wanous and Lawler [33] proposed variations on two basic models for measuring satisfaction. The applicable definition of satisfaction is the sum of the user's weighted reactions to a set of factors,
S_i = \sum_{j=1}^{n} R_{ij} W_{ij} \qquad (1)
where

R_{ij} = the reaction to factor j by individual i,
W_{ij} = the importance of factor j to individual i.

This model suggests that satisfaction is the sum of one's positive and negative reactions to a set of factors. An individual's feeling must, in this model, be placed somewhere between a "most negative" reaction and a "most positive" reaction. Implementation of the model centers on two different requirements. First, the set of factors comprising the domain of satisfaction must be identified. Second, a vehicle for scaling an individual's reaction to those factors must be found.
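To make the model concrete, the following sketch (in Python, which is of course not part of the paper) evaluates equation (1) for one hypothetical respondent; the factor names, reaction values and importance weights below are illustrative assumptions, not data from the study.

```python
# Minimal sketch of equation (1): satisfaction as a weighted sum of factor reactions.
# Reactions R_ij lie in [-3, +3]; importance weights W_ij lie in {0.10, 0.25, ..., 1.00}.
# All values are hypothetical, for illustration only.

reactions = {"accuracy": 2.25, "timeliness": -1.0, "flexibility": 0.5}
importance = {"accuracy": 1.00, "timeliness": 0.70, "flexibility": 0.85}

def satisfaction(reactions, importance):
    """Equation (1): S_i = sum over factors j of R_ij * W_ij."""
    return sum(reactions[f] * importance[f] for f in reactions)

print(satisfaction(reactions, importance))  # 2.25*1.00 - 1.0*0.70 + 0.5*0.85 ≈ 1.975
```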

3. Factors Affecting User Satisfaction

There are clearly a number of factors affecting computer user satisfaction. An initial list of factors was established via a review of 22 studies of the computer/user interface ([1], [2], [4]-[6], [8], [10]-[13], [15], [16], [18]-[21], [24], [25], [28]-[31]). From them, 36 distinct factors were identified.

Once the list was generated, tests were run as to its completeness and accuracy. First, a set of three data processing professionals was asked to review the list. They recommended the addition of two factors. The expanded list of factors, along with their definitions, appears in the Appendix of this paper. This expanded list was then empirically compared to interview responses from 32 middle manager users in 8 different organizations. The interviews were constructed to encourage reflection on past and present relations with computer products and services. Comments concerning the respondent's attitude toward these relations were taped.



A critical incident analysis technique was then pretested for repeatability and applied to the tapes. The intent was to examine the completeness of the expanded list. The list would be assumed complete if, at α = 0.01, any factor mentioned in an interview appeared on the list with probability of 0.90. The analysis resulted in 638 mentions of factors, of which 625 could be placed on the list. Using the normal approximation to the binomial we have Pr(x ≥ 625) = Pr(z ≥ 6.73) ≈ 0. Thus completeness was easily established. In fact, at the α = 0.01 level, the results suggested a 0.99 probability that a mentioned factor was on the list. Therefore, it was concluded that the expanded list constituted a complete domain for measurement.
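As a check on the arithmetic of the completeness test just described, the sketch below reproduces the normal approximation to the binomial with p0 = 0.90, n = 638 and x = 625 taken from the text; the exact z-value depends on whether a continuity correction is applied, so the reported 6.73 and the value computed here differ slightly.

```python
import math

# Normal approximation to the binomial test of list completeness (see text).
# H0: a factor mentioned in an interview appears on the list with probability p0 = 0.90.
n, x, p0 = 638, 625, 0.90

mean = n * p0                                # 574.2
sd = math.sqrt(n * p0 * (1 - p0))            # about 7.58

z = (x - mean) / sd                          # about 6.70, in line with the reported 6.73
p_value = 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail probability, effectively zero

print(z, p_value)
```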
Shortly after each interview, the respondent was shown a list of the factors they had mentioned. They were then asked to rank order the factors as to importance relative to their own satisfaction. The minimum, average and maximum rankings for each factor are shown in Table 1. Factors are listed in Table 1 according to the average ranking and thus suggest their perceived order of importance. These rankings strongly suggest that individuals differ in the factors which affect their perception of satisfaction.

TABLE 1
Self-Assessed Rankings of Factor Importance

Factor                                          Minimum   Average   Maximum
Flexibility                                        2        5.8       14
Accuracy                                           1        6.1       16
Timeliness                                         1        6.3       17
Reliability                                        1        6.4       18
Completeness                                       1        6.9       23
Confidence in systems                              1        6.9       16
Relevancy                                          1        7.2       17
Precision                                          1        8.0       12
Technical competence of the EDP staff              1        8.2       27
Currency                                           1        8.5       13
Priorities determination                           2        9.0       22
Error recovery                                     3        9.0       17
Response/turnaround time                           1        9.1       18
Convenience of access                              1        9.1       19
Attitude of the EDP staff                          2        9.7       25
Time required for new development                  1       10.0       19
Perceived utility                                  2       10.7       24
Documentation                                      4       10.8       21
Feeling of participation                           1       10.8       26
Processing of change requests                      5       11.3       22
Communication with the EDP staff                   1       11.3       22
Relationship with the EDP staff                    1       11.5       29
Understanding of systems                           1       11.5       22
Degree of training                                 5       12.3       22
Job effects                                        1       12.3       22
Top management involvement                         1       12.4       22
Feeling of control                                 2       12.7       23
Schedule of products and services                  9       13.8       19
Format of output                                   3       14.1       23
Mode of interface                                  8       15.0       21
Security of data                                   8       15.0       20
Expectations                                       7       15.0       24
Organizational position of the EDP function        4       15.2       24
Volume of output                                   6       16.6       25
Language                                          10       17.3       24
Charge-back method of payment for services        14       18.0       19
Organizational competition with the EDP unit      11       19.5       30
Vendor support                                    14       19.8       28



Eighteen of the 38 factors were ranked first at least once. Only nine of the factors failed to get into the "top five" for at least one user. Thus, the causes of satisfaction vary from user to user, and any of the 38 factors could, conceivably, play a significant role in some user's satisfaction measure.

At the end of the interview session, each respondent was asked to evaluate his overall sense of satisfaction with his present computer experiences. This was done to allow later comparison with satisfaction as measured by the research questionnaire. To accomplish this evaluation, the following seven-interval scale was employed:

Satisfied  |__|__|__|__|__|__|__|  Dissatisfied

Of the 13 factors mentioned in the taped interviews and failing to appear on the list of 38 factors, one was brought up four times. Because of this frequency of mention, that factor, "integration of the system," was added to the list, making a total of 39 factors.

A Model for Measurement

The next step was to identify a vehicle for measuring the user's reactions to the various factors. Once again, the motivation for the vehicle was found in the psychology literature. As defined earlier, satisfaction is a bi-dimensional attitude affected by a variety of factors. The dimensionality and intensity of an individual's reaction to a factor must be measured. That measurement will, in essence, be an evaluation of his or her reaction to the factor relative to the perceived information requirements.

The semantic differential technique was developed by Osgood, Suci and Tannenbaum to measure the "meaning" of things [23]. The technique is based on the use of adjectives to describe the characteristics of concepts and objects. Since people use adjectives to explain their perception of things, adjectives can be used to measure those perceptions. Measurement of one's perception involves the rating of four bipolar adjective pairs ranging from a negative to a positive feeling. For example, the meaning of "format of output" could be measured between the pairs good vs bad, simple vs complex, readable vs unreadable and useful vs useless. The evaluation of one's feelings relative to any given adjective pair is accomplished, according to Osgood et al., via a seven-interval scale. The seven intervals from negative to positive were denoted by the adverbial qualifiers extremely, quite, slightly, neither/equally, slightly, quite and extremely. Figure 1 illustrates the semantic differential technique for measuring reaction to the "Degree of EDP training" factor.
Degree of EDP training provided to users: The amount of specialized instruction and practice that is afforded to the user to increase the user's proficiency in utilizing the computer capability that is available.

complete       |__|__|__|__|__|__|__|  incomplete
sufficient     |__|__|__|__|__|__|__|  insufficient
high           |__|__|__|__|__|__|__|  low
superior       |__|__|__|__|__|__|__|  inferior
satisfactory   |__|__|__|__|__|__|__|  unsatisfactory

To me, this factor is:
important      |__|__|__|__|__|__|__|  unimportant

FIGURE 1. Illustration of Questionnaire Form.



Two additional scales were added to those assigned to each factor. The first scale was the adjective pair satisfactory-unsatisfactory. This was done to test the internal consistency of the other four pairs and thus the internal validity of the instrument. The second addition was the pair important-unimportant. This was done to measure the weight given to the factor, as required by equation (1).

The scaling of the seven intervals was quantified by assigning the values -3, -2, -1, 0, 1, 2 and 3 to the intervals. The importance scale was assigned values from 0.10 to 1.00 in steps of 0.15, the value 0.10 being associated with extremely unimportant and 1.00 with extremely important. Using these numbers, the reaction of an individual to a given factor is the average of the four assigned values:

R_{ij} = \frac{1}{4} \sum_{k=1}^{4} I_{ijk} \qquad (2)

where I_{ijk} is the numeric response of user i to adjective pair k of factor j, I_{ijk} \in \{-3, -2, -1, 0, 1, 2, 3\}.
Thus, R_{ij} can take on values from -3 to +3 in increments of 0.25. Summing the individual weighted factor responses, one gets the overall satisfaction for the user:

S_i = \sum_{j=1}^{39} R_{ij} W_{ij} \qquad (3)
The range of S_i is from +117 to -117 (i.e., ±3 × 39) with increments of 0.0375 (i.e., 0.25 × 0.15).

The perceived satisfaction as measured by equation (3) can be deceiving. The problem occurs because a given individual may have no reaction to one or more factors. Suppose a user evaluated 20 of the 39 factors as highly satisfactory (i.e., +3) with an extreme importance (i.e., 1.00) and evaluated the other 19 factors as neutrally satisfactory (i.e., 0) and unimportant. Then the perceived overall satisfaction score would be 60, approximately half way between 0 and 117. This user can only be viewed as highly satisfied, yet his score suggests only a moderate rating.

To overcome this problem, the score can be normalized to ±1.00. The normalized score is based only on factors with at least one nonzero response in the first four adjective pairs. Factors evaluated with only zero responses are omitted as not meaningful. The normalized score for a user is equal to the actual score divided by the maximum possible, where the maximum possible score is the number of factors receiving at least one nonzero score multiplied by 3.0. That is:

NS_i = S_i / (F_i \times 3.0) \qquad (4)

where NS_i is the normalized satisfaction for user i and F_i is the number of meaningful factors,

F_i = \sum_{j=1}^{39} \delta_{ij}, \qquad \delta_{ij} = \begin{cases} 1 & \text{if } \sum_{k=1}^{4} |I_{ijk}| > 0, \\ 0 & \text{otherwise.} \end{cases}

Thus the normalized score ranges from -1 to +1. One might translate this normalized score as shown in Table 2.

TABLE 2
Score Boundaries for Normalized User Satisfaction

Normalized score     Translation
+1.00                maximally satisfied
+0.67                quite satisfied
+0.33                slightly satisfied
 0.00                neither satisfied nor dissatisfied
-0.33                slightly dissatisfied
-0.67                quite dissatisfied
-1.00                maximally dissatisfied
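To tie equations (2)-(4) together, the following sketch runs the full scoring pipeline for one respondent; the raw adjective-pair responses and importance weights are hypothetical, and the function names are mine rather than the paper's.

```python
# Sketch of the scoring model: adjective-pair responses -> R_ij -> S_i -> NS_i.
# responses[j] holds the four adjective-pair values I_ijk (each in -3..+3) for factor j;
# weights[j] holds the importance weight W_ij in {0.10, 0.25, ..., 1.00}.
# The three factors shown are hypothetical illustrations, not study data.

responses = {
    "accuracy":         [3, 2, 3, 2],
    "timeliness":       [-1, -2, 0, -1],
    "volume of output": [0, 0, 0, 0],   # no reaction: excluded from normalization
}
weights = {"accuracy": 1.00, "timeliness": 0.70, "volume of output": 0.10}

def factor_reaction(pairs):
    """Equation (2): R_ij is the mean of the four adjective-pair responses."""
    return sum(pairs) / 4.0

def satisfaction_scores(responses, weights):
    """Equations (3) and (4): raw score S_i and normalized score NS_i."""
    s = sum(factor_reaction(v) * weights[f] for f, v in responses.items())
    meaningful = sum(1 for v in responses.values() if any(x != 0 for x in v))
    ns = s / (meaningful * 3.0) if meaningful else 0.0
    return s, ns

print(satisfaction_scores(responses, weights))  # approximately (1.8, 0.3)
```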


4. Evaluation of the Questionnaire

The next step in the development of the questionnaire was an empirical test of its validity and reliability. The questionnaire was constructed into an 8 1/2 by 5 1/2 inch booklet form with two factors per page. Appropriate instructions were added along with an attractive cover. The 32 middle managers previously interviewed were then asked to fill out the questionnaire. These people were selected because of the desire to compare the questionnaire score to the personal assessment attained at the conclusion of the earlier interview. The time lapse between the interview and the questionnaire evaluation was 4 to 6 weeks. Twenty-nine of the 32 interview respondents completed and returned the questionnaire. The time to complete the questionnaire ranged from 15 to 25 minutes.

Reliability

The 29 returned questionnaires and their corresponding self-assessment scores were used to examine the reliability and validity of the measurement questionnaire. Reliability is defined as the absence of measurement error. A reliable instrument will measure the same object with consistent and error free results. In the research reported here, an exact measure of error was not available. Therefore, error had to be statistically estimated. Assuming factor responses R_{ij} to be independent and normally distributed, an analysis of variance was used to estimate measurement errors. The total variance was composed of components due to differences between each adjective pair, differences between each subject and measurement error. A reliability coefficient was calculated using [32]:

r_j = 1 - (V_{e,j} / V_{sub,j}) \qquad (5)

where

r_j = reliability of the measure for factor j,
V_{sub,j} = variance due to accountable differences in subjects,
V_{e,j} = remaining residual variance due to measurement error.

Reliability of the satisfaction questionnaire was calculated for each factor. The reliability coefficients obtained were very high. Of the 39 factors, 32 resulted in a coefficient greater than 0.90. The average coefficient was 0.93 and the minimum was 0.75. Thus, very little of the variance in responses was due to measurement error and, it can be argued, the questionnaire is a reliable instrument.
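Equation (5) leaves the variance components implicit. One common reading, assumed here rather than spelled out in the paper, is a two-way analysis of variance (subjects by adjective pairs) on the responses to a single factor, with V_sub,j the between-subjects mean square and V_e,j the residual mean square.

```python
import numpy as np

def reliability_coefficient(responses):
    """Equation (5) under a two-way ANOVA reading: r_j = 1 - V_e,j / V_sub,j.

    responses: an (n_subjects x 4) array of adjective-pair scores for one factor.
    """
    n, k = responses.shape
    grand = responses.mean()
    ss_total = ((responses - grand) ** 2).sum()
    ss_subjects = k * ((responses.mean(axis=1) - grand) ** 2).sum()
    ss_pairs = n * ((responses.mean(axis=0) - grand) ** 2).sum()
    ss_error = ss_total - ss_subjects - ss_pairs

    ms_subjects = ss_subjects / (n - 1)            # V_sub,j
    ms_error = ss_error / ((n - 1) * (k - 1))      # V_e,j
    return 1.0 - ms_error / ms_subjects

# Hypothetical responses from five subjects to one factor's four adjective pairs.
example = np.array([[3, 2, 3, 3], [1, 1, 0, 1], [-2, -1, -2, -2], [0, 1, 0, 0], [2, 2, 3, 2]])
print(reliability_coefficient(example))  # close to 1 when subject differences dominate
```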

Validity

Validity is defined as the extent to which the measurement instrument measures what it is supposed to measure. Traditionally, three different categories of validity are examined: content validity, predictive validity and construct validity. The satisfaction questionnaire was examined for each of these categories.

Content validity implies that all aspects of the attribute being measured are considered by the instrument, so that the measurement is complete and sound. The methodology used to develop the factor list and the result of the critical incident analysis suggest strong content validity. Even though satisfaction for different users is influenced by different factors, the evidence suggests a very high probability that any influencing factor is included in the questionnaire.

In addition, a product moment correlation coefficient was calculated for each adjective pair combination. Scales which purport to measure the same attribute should be positively correlated. A Student-t distribution was used to test the significance of the resulting coefficients. All but 1 of the 234 coefficients were significant at the 0.05 level, and that pair of adjectives was contained in the factor "organizational competition with the EDP unit," which was ranked next to last in importance by the test community. Thus, it is concluded that the questionnaire is internally homogeneous.

Finally, the test results were examined to see if the questionnaire could discriminate between satisfied and dissatisfied responses. For each factor, the responses were separated to identify those individuals who responded negatively (dissatisfied) and positively (satisfied). The averages of the responses for each group were calculated and the differences in averages were then examined. In 97 of the 156 adjective pairs, the differences were greater than 3 intervals. The minimum difference was 1.67 intervals. Therefore, the scales were readily able to discriminate between satisfied and dissatisfied respondents.

Predictive or external validity implies that the instrument is consistent and agrees with other independent measures, so that it can be used to measure or predict outside the bounds of the research experiments. The establishment of predictive validity is usually accomplished via a comparison to another "established" measure of the same attribute; the difficulty is often one of finding an independent established measure. To examine predictive validity, the respondents were asked to self-evaluate their perceived satisfaction during the interview process. The resulting self-assessment and questionnaire measures of satisfaction are shown in Table 3. The results are sorted by measured satisfaction, and the letter in the subject identification represents organizational affiliation. The correlation between these two sets of figures is 0.79, which is high considering the fact that the self-assessment score could only take on one of seven values.

A second indication of predictive validity was provided by the fifth adjective pair, satisfactory vs unsatisfactory. The idea was to compare, on an individual factor basis, this self-assessment score with the measured factor score. The independence of these two measures is subject to question because the data were collected at the same time and in the same manner. The resulting correlation coefficients ranged from 0.97 to 0.75 with an average coefficient of 0.91. The factor with the lowest correlation was once again the second least important factor. The conclusion based on these two tests was that the questionnaire did predict self-assessed satisfaction very well.

Construct validity implies that the measurement instrument performs as expected relative to the construct of the attribute being measured. In the context of user satisfaction, construct validity is established if those factors which are important to perceived satisfaction are important in the measurement questionnaire. The factor's importance scale was averaged and the results rank ordered. These rankings and the average self-assessed rankings indicated in Table 1 were compared.
The Spearman Rank Correlation Coefficient was 0.743. The five most important and five least important factors are listed in Table 4. There is an intuitive appeal to the importance attributed to these factors: the most important factors all tend to indicate the utility of the services provided. The factors most frequently causing dissatisfaction were also revealing. These factors were: (1) time required for new development, (2) processing of change requests, (3) flexibility, (4) integration of the system, (5) degree of training and (6) top management involvement.

TABLE 3
Self-Assessed and Measured Satisfaction for 29 Test Subjects

Subject   Self-Assessed Satisfaction   Measured Satisfaction
A1        +3                           111.75
E4        +3                            83.00
A3        +3                            69.25
E1        +2                            63.75
H1        +2                            60.75
D3        +2                            60.25
E2        +2                            57.75
H4        +2                            56.25
H3        +2                            55.25
D1        +1                            54.25
A2        +2                            49.75
C1        +2                            48.50
A4        +2                            41.50
H2        -2                            33.75
E3        +2                            31.75
D4        +1                            31.25
G1        +2                            21.50
C3        -1                            20.00
G2        +1                            18.50
G3        +1                            17.75
B1        +2                            15.25
C4        -2                            -4.75
C5        -1                            -7.00
F4        +1                           -15.50
G4        +2                           -15.75
F3        -2                           -27.75
F1        -2                           -38.00
F2        -2                           -46.75
F5        -3                           -64.25
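The predictive-validity figure quoted above can be recomputed directly from Table 3. The sketch below does so with an ordinary Pearson product-moment coefficient, which I am assuming is the statistic the authors used; the data pairs are transcribed from the table.

```python
# Pearson correlation between self-assessed and measured satisfaction (Table 3).
pairs = [
    (+3, 111.75), (+3, 83.00), (+3, 69.25), (+2, 63.75), (+2, 60.75),
    (+2, 60.25), (+2, 57.75), (+2, 56.25), (+2, 55.25), (+1, 54.25),
    (+2, 49.75), (+2, 48.50), (+2, 41.50), (-2, 33.75), (+2, 31.75),
    (+1, 31.25), (+2, 21.50), (-1, 20.00), (+1, 18.50), (+1, 17.75),
    (+2, 15.25), (-2, -4.75), (-1, -7.00), (+1, -15.50), (+2, -15.75),
    (-2, -27.75), (-2, -38.00), (-2, -46.75), (-3, -64.25),
]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

xs, ys = zip(*pairs)
print(round(pearson(xs, ys), 2))  # about 0.79, the predictive-validity value quoted in the text
```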

TABLE 4
Five Most Important and Five Least Important Factors

Most Important:
1. Accuracy
2. Reliability
3. Timeliness
4. Relevancy
5. Confidence in system

Least Important:
1. Feeling of control
2. Volume of output
3. Vendor support
4. Degree of training
5. Organizational position of EDP

This list of factors agrees very well with results published elsewhere ([5], [12], [15], [17], [18], [21], [28]). Although no statistical measure of construct validity was available, there is significant intuitive evidence to support a positive contention. The fact that unexpected results did not occur is evidence that the measurement questionnaire does reflect the true user satisfaction construct.

Application of the Questionnaire

We can therefore argue that the user satisfaction questionnaire and its attendant model form a reliable and valid measurement instrument. The next issue is its utility in design and analysis. Deese [9] reported on experiences with the questionnaire.



Satisfaction was measured in a centrally administered five-site organization. The attitudes of 15 user managers and 35 terminal users at each site were examined. Deese concluded that the questionnaire was very useful. The time needed to administer the survey and to analyze and document the results was less than one week per installation. Deese claimed, "The results identified problems that would not otherwise have been discovered."

In addition, Deese discussed three difficulties with the questionnaire. These issues speak to the application of the questionnaire rather than its development. The first difficulty was with user concern for anonymity. The test community was organized, and a meeting with union leaders was needed to convince them that the questionnaire posed no threat. As with any measurement instrument, there is a potential for concern that the results could be used against the respondent. It is important to explain that the results are intended to identify ways to improve computer services and not to identify dissatisfied users. The questionnaire must be used in an atmosphere of user anonymity.

A second difficulty dealt with the applicability and clarity of some of the questions. The entire set of 39 factors was unnecessary for the Deese study. Ten factors were omitted from the questionnaire given to managers and 21 were omitted from the terminal users' questionnaire. These omissions amounted to a preselection of the factors of interest in a specific situation. In addition, the questions were made clearer by couching them in vocabulary specific to the user community. As a research tool, the questionnaire should remain a list of 39 general factors. Taken as a whole, the list forms a relatively complete definition of computer user satisfaction, and the universe of application would be narrowed if the list were arbitrarily shortened. For specific applications, however, it is reasonable to remove irrelevant factors and redefine the factors in situation-specific terms.

The last difficulty identified by Deese dealt with time. Some respondents were confused as to whether their answers were to reflect present conditions or an aggregate of past conditions. It should be made clear that the questionnaire is a snapshot of present conditions.

The Deese experience indicated the utility of the technique as a systems analysis tool. The results pointed out problems with the existing computer system that could then be corrected. The experience also indicated the need for clarity in application so as to avoid biased responses.

5. Conclusions

The efforts reported in this paper offer several contributions to both the research and practitioner communities. First, a definition of computer user satisfaction has been developed. The definition consists of the weighted sum of a user's positive or negative reactions to a set of 39 factors. It was shown that the importance of any factor differs over the universe of computer users; in one test, every factor was ranked 14th or higher in importance by at least one respondent.

A second contribution was the translation of the satisfaction definition into a valid measurement instrument. The measure is based on the semantic differential of four adjective pairs which describe each factor. The relative importance of the factor is based on a separate fifth reaction. A variety of statistical tests were presented to show the validity and reliability of the questionnaire. Thus, it is concluded that computer user satisfaction as defined can be measured.

Application experiences using the technique were discussed. The execution of the entire questionnaire required between 15 and 25 minutes per user, and it took one week to study, analyze and report on an entire installation.



In specific situations, it is possible to preselect a subset of the 39 factors to reduce the size of the questionnaire. Experience indicates the need to assure the user community that their candid responses will not be used against them. Further, modifying the factor titles and definitions to use language specific to the installation is helpful. Finally, it is helpful when giving instructions to indicate that one's response should be relative to the present experience and not reflect historical experiences. Conditioned by these concerns, the experience showed that the technique does identify problems that would otherwise not have been identified.

Further research is needed in the development of the measurement tool proposed here. Additional validation efforts are needed in a wide variety of user environments. Factor analysis needs to be applied to see if and when the set of factors can be reduced. The instrument should be used to establish average levels of satisfaction in different situations. Closely controlled studies are needed to test the relationship between satisfaction and bottom line indicators of user and organization performance. Finally, studies are needed to explore the use of the satisfaction measure as a tool for improving systems design. Some of these studies are presently under way. With these and other efforts, the proposed tool promises to be a valuable addition to information systems research and application.

Appendix. Factors, Their Definitions and Adjective Pairs


1. Top management involvement: The positive or negative degree of interest, enthusiasm, support, or participation of any management level above the user's own level toward computer-based information systems or services or toward the computer staff which supports them.
   strong vs weak; consistent vs inconsistent; good vs bad; significant vs insignificant

2. Organizational competition with the EDP unit: The contention between the respondent's organizational unit and the EDP unit when vying for organizational resources or for responsibility for success or failure of computer-based information systems or services of interest to both parties.
   productive vs destructive; rational vs emotional; low vs high; harmonious vs dissonant

3. Priorities determination: Policies and procedures which establish precedence for the allocation of EDP resources and services between different organizational units and their requests.
   fair vs unfair; consistent vs inconsistent; just vs unjust; precise vs vague

4. Charge-back method of payment for services: The schedule of charges and the procedures for assessing users on a pro rata basis for the EDP resources and services that they utilize.
   just vs unjust; reasonable vs unreasonable; consistent vs inconsistent; known vs unknown

5. Relationship with the EDP staff: The manner and methods of interaction, conduct, and association between the user and the EDP staff.
   harmonious vs dissonant; good vs bad; cooperative vs uncooperative; candid vs deceitful

6. Communication with the EDP staff: The manner and methods of information exchange between the user and the EDP staff.
   harmonious vs dissonant; productive vs destructive; precise vs vague; meaningful vs meaningless

7. Technical competence of the EDP staff: The computer technology skills and expertise exhibited by the EDP staff.
   current vs obsolete; sufficient vs insufficient; superior vs inferior; high vs low

8. Attitude of the EDP staff: The willingness and commitment of the EDP staff to subjugate external, professional goals in favor of organizationally directed goals and tasks.
   user-oriented vs self-centered; cooperative vs belligerent; courteous vs discourteous; positive vs negative

9. Schedule of products and services: The EDP center timetable for production of information system outputs and for provision of computer-based services.
   good vs bad; regular vs irregular; reasonable vs unreasonable; acceptable vs unacceptable

10. Time required for new development: The elapsed time between the user's request for new applications and the design, development, and/or implementation of the application systems by the EDP staff.
   short vs long; dependable vs undependable; reasonable vs unreasonable; acceptable vs unacceptable

11. Processing of change requests: The manner, method, and required time with which the EDP staff responds to user requests for changes in existing computer-based information systems or services.
   fast vs slow; timely vs untimely; simple vs complex; flexible vs rigid

12. Vendor support: The type and quality of the service rendered by a vendor, either directly or indirectly, to the user to maintain the hardware or software required by that organizational status.
   skilled vs bungling; sufficient vs insufficient; eager vs indifferent; consistent vs inconsistent

13. Response/turnaround time: The elapsed time between a user-initiated request for service or action and a reply to that request. Response time generally refers to the elapsed time for a terminal type request or entry. Turnaround time generally refers to the elapsed time for execution of a program submitted or requested by a user and the return of the output to that user.
   fast vs slow; good vs bad; consistent vs inconsistent; reasonable vs unreasonable

14. Means of input/output with the EDP center: The method and medium by which a user inputs data to and receives output from the EDP center.
   convenient vs inconvenient; clear vs hazy; efficient vs inefficient; organized vs disorganized


15. Convenience of access: The ease or difficulty with which the user may act to utilize the capability of the computer system.
   convenient vs inconvenient; good vs bad; easy vs difficult; efficient vs inefficient

16. Accuracy: The correctness of the output information.
   accurate vs inaccurate; high vs low; consistent vs inconsistent; sufficient vs insufficient

17. Timeliness: The availability of the output information at a time suitable for its use.
   timely vs untimely; reasonable vs unreasonable; consistent vs inconsistent; punctual vs tardy

18. Precision: The variability of the output information from that which it purports to measure.
   sufficient vs insufficient; consistent vs inconsistent; high vs low; definite vs uncertain

19. Reliability: The consistency and dependability of the output information.
   consistent vs inconsistent; high vs low; superior vs inferior; sufficient vs insufficient

20. Currency: The age of the output information.
   good vs bad; timely vs untimely; adequate vs inadequate; reasonable vs unreasonable

21. Completeness: The comprehensiveness of the output information content.
   complete vs incomplete; consistent vs inconsistent; sufficient vs insufficient; adequate vs inadequate

22. Format of output: The material design of the layout and display of the output contents.
   good vs bad; simple vs complex; readable vs unreadable; useful vs useless

23. Language: The set of vocabulary, syntax, and grammatical rules used to interact with the computer systems.
   simple vs complex; powerful vs weak; easy vs difficult; easy-to-use vs hard-to-use

24. Volume of output: The amount of information conveyed to a user from computer-based systems. This is expressed not only by the number of reports or outputs but also by the voluminousness of the output contents.
   concise vs redundant; sufficient vs insufficient; necessary vs unnecessary; reasonable vs unreasonable


25. Relevancy: The degree of congruence between what the user wants or requires and what is provided by the information products and services.
   useful vs useless; relevant vs irrelevant; clear vs hazy; good vs bad

26. Error recovery: The methods and policies governing correction and rerun of system outputs that are incorrect.
   fast vs slow; superior vs inferior; complete vs incomplete; simple vs complex

27. Security of data: The safeguarding of data from misappropriation or unauthorized alteration or loss.
   secure vs insecure; good vs bad; definite vs uncertain; complete vs incomplete

28. Documentation: The recorded description of an information system. This includes formal instructions for the utilization of the system.
   clear vs hazy; available vs unavailable; complete vs incomplete; current vs obsolete

29. Expectations: The set of attributes or features of the computer-based information products or services that a user considers reasonable and due from the computer-based information support rendered within his organization.
   pleased vs displeased; high vs low; definite vs uncertain; optimistic vs pessimistic

30. Understanding of systems: The degree of comprehension that a user possesses about the computer-based information systems or services that are provided.
   high vs low; sufficient vs insufficient; complete vs incomplete; easy vs hard

31. Perceived utility: The user's judgment about the relative balance between the cost and the considered usefulness of the computer-based information products or services that are provided. The costs include any costs related to providing the resource, including money, time, manpower, and opportunity. The usefulness includes any benefits that the user believes to be derived from the support.
   high vs low; positive vs negative; sufficient vs insufficient; useful vs useless

32. Confidence in the systems: The user's feelings of assurance or certainty about the systems provided.
   high vs low; strong vs weak; definite vs uncertain; good vs bad

33. Feeling of participation: The degree of involvement and commitment which the user shares with the EDP staff and others toward the functioning of the computer-based information systems and services.
   positive vs negative; encouraged vs repelled; sufficient vs insufficient; involved vs uninvolved


34. Feeling of control: The user's awareness of the personal power or lack of power to regulate, direct or dominate the development, alteration, and/or execution of the computer-based information systems or services which serve the user's perceived function.
   high vs low; sufficient vs insufficient; precise vs vague; strong vs weak

35. Degree of training: The amount of specialized instruction and practice that is afforded to the user to increase the user's proficiency in utilizing the computer capability that is available.
   complete vs incomplete; sufficient vs insufficient; high vs low; superior vs inferior

36. Job effects: The changes in job freedom and job performance that are ascertained by the user as resulting from modifications induced by the computer-based information systems and services.
   liberating vs inhibiting; significant vs insignificant; good vs bad; valuable vs worthless

37. Organizational position of the EDP function: The hierarchical relationship of the EDP function to the overall organizational structure.
   appropriate vs inappropriate; strong vs weak; clear vs hazy; progressive vs regressive

38. Flexibility of systems: The capacity of the information system to change or to adjust in response to new conditions, demands, or circumstances.
   flexible vs rigid; versatile vs limited; sufficient vs insufficient; high vs low

39. Integration of systems: The ability of systems to communicate/transmit data between systems servicing different functional areas.
   complete vs incomplete; sufficient vs insufficient; successful vs unsuccessful; good vs bad¹

¹ Research reported here was conducted by Dr. Pearson as part of his dissertation requirements. He has copyrighted the measurement instrument. Use of the instrument for other than research purposes should be preceded by permission from him.

References
1. ADAMS, C. R., "How Management Users View Information Systems," Decision Sci., Vol. 6, No. 2 (April 1975), pp. 337-345.
2. ADAMS, C. R. AND SCHROEDER, R. G., "Managers and MIS: They Get What They Want," Bus. Horizons, Vol. 16, No. 6 (December 1973), pp. 63-68.
3. CHURCHILL, G. A., FORD, N. M. AND WALKER, O. C., "Measuring the Job Satisfaction of Industrial Salesmen," J. Marketing Res., Vol. 11, No. 3 (August 1974), pp. 254-260.
4. CHURCHILL, N., KEMPSTER, J. H. AND URETSKY, M., Computer-Based Information Systems for Management: A Survey, National Association of Accountants, New York, 1969.
5. COLTON, K. W., "Computers and Policy: Patterns of Success and Failure," Sloan Management Rev., Vol. 14, No. 2 (Winter 1973), pp. 75-98.
6. CONSTANT, D. L., "From MIS to MBC: An Investigation into Why One Corporation Eliminated Its Computer-Based Management Information Systems," Unpublished Master's thesis, Air Force Institute of Technology, 1973.
7. CROSS, D., "The Worker Opinion Survey: A Measure of Shop-Floor Satisfaction," Occupational Psych., Vol. 47, No. 3-4 (1973), pp. 193-208.
8. DEARDEN, JOHN AND NOLAN, R. L., "How to Control the Computer Resource," Harvard Bus. Rev., Vol. 51, No. 6 (November-December 1973), pp. 68-78.
9. DEESE, D., "Experiences Measure User Satisfaction," Proceedings of the Computer Measurement Group of ACM, Dallas, December 1979.
10. DIEBOLD, JOHN, "Bad Decisions on Computer Use," Harvard Bus. Rev., Vol. 47, No. 1 (January-February 1969), pp. 14-176.
11. FIRNBERG, DAVID, Computers, Management, and Information, George Allen and Unwin Ltd., London, 1973.
12. FITTS, W. H., "The Computer and Evolving Management Theory," Unpublished USAWC Research Report, 1971.
13. GUPTA, ROGER, "Information Manager: His Role in Corporate Management," Data Management, Vol. 12, No. 7 (July 1974), pp. 26-29.
14. HANES, L. F. AND KRIEBEL, C. H., "Research on Productivity Measurement Systems for Administrative Services: Computing and Information Services," Vols. I and II, Westinghouse Electric Corporation, Research and Development Center, Pittsburgh, Pa., July 1978.
15. HOFER, CHARLES W., "Emerging EDP Pattern," Harvard Bus. Rev., Vol. 48, No. 2 (March-April 1970), pp. 16-171.
16. HOLLAND, W. E., "Socio-Technical Aspects of MIS," J. Systems Management, Vol. 25, No. 2 (February 1974), pp. 14-16.
17. JANECEK, F. P., "A Comprehensive Analysis of Factors Affecting MIS Success or Failure and Suggested Failure Avoidance Procedures," Unpublished Master's Thesis, Arizona State University, December 1978.
18. LUCAS, H. C., JR., "User Reactions and the Management of Information Systems," Management Informat., Vol. 2, No. 4 (August 1973), pp. 165-172.
19. LUCAS, H. C., JR., "A Descriptive Model of Information Systems in the Context of the Organization," Proceedings of the Wharton Conference on Research on Computers in Organizations, Vol. 5 (Winter 1973), pp. 27-39.
20. LUCAS, H. C., JR., "An Empirical Study of a Framework for Information Systems," Decision Sci., Vol. 5, No. 1 (January 1974), pp. 102-114.
21. MCKINSEY AND COMPANY, INC., "Unlocking the Computer's Profit Potential," Computers and Automation, Vol. 18 (April 1969), pp. 24-33.
22. MORRIS, W. T., "The Development of Productivity Measurement Systems for Administrative Computing and Information Services," Ohio State University, 1978.
23. OSGOOD, C. E., "Studies on the Generality of Affective Meaning Systems," Amer. Psych., Vol. 17, No. 1 (January 1962), pp. 10-28.
24. POWERS, R. F. AND DICKSON, G. W., "MIS Project Management: Myths, Opinions, and Reality," California Management Rev., Vol. 15, No. 3 (Spring 1973), pp. 147-156.
25. SCHUSSEL, GEORGE, "Scoring DP Performance," Infosystems, Vol. 21, No. 9 (September 1974), pp. 59-62.
26. SCHWAB, D. P. AND CUMMINGS, L. L., "Theories of Performance and Satisfaction: A Review," in W. E. Scott and L. L. Cummings (Eds.), Readings in Organizational Behavior and Human Performance, Richard D. Irwin, Inc., Homewood, 1973, pp. 130-153.
27. SMITH, P. C., KENDALL, L. M. AND HULIN, C. L., The Measurement of Satisfaction in Work and Retirement: A Strategy for the Study of Attitudes, Rand-McNally, Chicago, 1969.
28. STONE, M. M. AND TARNOWIESKI, D., Management Systems in the 1970's, American Management Association, Inc., New York, 1972.
29. SWANSON, E. B., "Management Information Systems: Appreciation and Involvement," Management Sci., Vol. 21, No. 2 (October 1974), pp. 178-188.
30. TEICHROEW, DANIEL, ED., "Education Related to the Use of Computers in Organizations," Comm. ACM, Vol. 14, No. 9 (September 1971), pp. 573-588.
31. TOMESKI, E. A., "Job Enrichment and the Computer: A Neglected Subject," Computers and People, Vol. 23, No. 11 (November 1974), pp. 7-11.
32. WALPOLE, R. E. AND MYERS, R. H., Probability and Statistics for Engineers and Scientists, Macmillan Company, New York, 1972.
33. WANOUS, J. P. AND LAWLER, E. E., "Measurement and Meaning of Job Satisfaction," J. Appl. Psych., Vol. 56, No. 2 (April 1972), pp. 95-105.
34. CYERT, R. AND MARCH, J., A Behavioral Theory of the Firm, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1963.
35. DEBONS, A., RAMAGE, W. AND ORIEN, J., "Effectiveness Model of Productivity," in L. F. Hanes and C. H. Kriebel (Eds.), "Research on Productivity Measurement Systems for Administrative Services: Computing and Information Services," Vol. 2 (July 1978), NSF Grant APR-20546.
36. EVANS, J., "Measures of Computer and Information Systems Productivity: Key Informant Interviews," Tech. Report APR-20546/TR-5, Westinghouse Res. Labs., Pittsburgh, Pa., October 1976.
37. LUCAS, H. C., "Performance and the Use of an Information System," Management Sci., Vol. 21, No. 8 (April 1975), pp. 908-919.
38. NEUMANN, S. AND SEGEV, E., "Evaluate Your Information System," J. Systems Management, Vol. 31 (March 1980).
39. NOLAND, R. AND SEWARD, H., "Measuring User Satisfaction to Evaluate Information Systems," in R. L. Nolan (Ed.), Managing the Data Resource Function, West Publishing Co., Los Angeles, California, 1974.
