Systems
Presenter Symposium Submitted to the Academy of Management
List of potential sponsors: Human Resources Division and Organizational Communication and
Information Systems Division
Chair:
Kimberly Lukaszewski
School of Business
State University of New York at New Paltz
1 Hawk Drive
New Paltz, NY 12561
lukaszek@newpaltz.edu
845-257-2661
Presenters:
Janet Marler
School of Business
University at Albany, State University of
New York
1400 Washington Avenue
Albany, NY 12222
marler@albany.edu
Sandra Fisher
School of Business
Clarkson University
375 Bertrand H. Snell Hall
Potsdam, NY 13699-5790
sfisher@clarkson.edu
Dianna L. Stone
Department of Management
College of Business
University of Texas at San Antonio
San Antonio, TX 78249
dianna.stone@utsa.edu
Eugene F. Stone-Romero
Department of Management
College of Business
University of Texas at San Antonio
San Antonio, TX 78249
eugene.romero@utsa.edu
Teresa Svacina
Department of Management
College of Business
University of Texas at San Antonio
San Antonio, TX 78249
Teresa.Svacina@utsa.edu
Richard Johnson
School of Business
University at Albany, State University of
New York
1400 Washington Avenue
Albany, NY 12222
rjohnson@albany.edu
Regina Cosentino Yanson
School of Business
University at Albany, State University of
New York
1400 Washington Avenue
Albany, NY 12222
ryanson@albany.edu
James Dulebohn
School of Labor & Industrial Relations
Michigan State University
East Lansing, MI 48824
dulebohn@msu.edu
Discussion Leader:
Michael J. Kavanagh, Professor Emeritus
and
President, Kavanagh Associates
125 Devon Road
Delmar, NY 12054
mickey.kavanagh@gmail.com
date, at least one model has been proposed (e.g., Stone, Stone-Romero, & Lukaszewski, 2006;
Stone & Lukaszewski, 2009), but it is clear that much more theoretical work is needed on the
topic. Second, given that eHRM systems involve the application of technology or information
systems to manage human resource processes, we believe that this new field would benefit from
the integration of existing knowledge in a number of related disciplines. For instance, researchers
might integrate and use relevant research in Management Information Systems (MIS),
Organizational Communication (OC), HR Strategy, and Industrial and Organizational
Psychology (I & O Psychology), and Education to gain a better understanding of eHRM
processes. In particular, there has been a great deal of research on the design and implementation
of information systems in the field of MIS (***), and it should certainly be relevant for research
on eHRM systems. Likewise, there has been considerable research on computer-mediated
communication by Kiesler and her colleagues (e.g., Kiesler, Siegel, & McGuire, 1984, 1988)
which should help eHRM researchers gain a better understanding of the key role that
communication plays in the development of these new systems. Similarly, there has been some
research on computerized testing in the field of I & O Psychology (e.g., Naglieri et al., 2004;
Mead & Drasgow, 1993). This work should be useful for designing e-recruitment and e-selection
systems, and may help provide guidelines for the ethical use of eHRM systems in organizations.
Finally, research in the area of strategic human resource management should provide some
important insights about the impact that eHRM may have on the HR function in organizations
(e.g., Huselid, 1995; Lepak & Snell, 1998). For instance, it may enhance our understanding of
the key role that technology plays in aligning HR practices with an organization's strategy (Thite
& Kavanagh, 2009).
Third, as noted above, few empirical studies have actually examined the effectiveness or
acceptance of eHRM systems. Some notable exceptions include research on e-recruiting (e.g.,
Dineen, Ling, Ash, & DelVecchio, 2007; Stone, Lukaszewski, & Isenhour, 2005), e-learning
(Brown, 2001; Salas, DeRouin, & Littrell, 2005), and e-selection (e.g., Bartram, 2006a; Potosky
& Bobko, 2004). However, there has been a paucity of research on e-performance management
(e.g., Payne, Horner, Boswell, Schroeder, & Stine-Cheyne, 2009), e-compensation and benefits
(Dulebohn & Marler, 2005), and virtual systems (Hertel, Geister, & Konradt, 2005). As a result,
additional research is needed on all of these topics.
Furthermore, implementation is one of the greatest challenges associated with the success
of eHRM systems. The primary reason for this is that they represent an entirely new way of
managing human resources, and delivering services to employees and managers (Gueutal &
Falbe, 2005; Marler & Dulebohn, 2005). As a result, eHRM systems may change managerial
roles, employee skill requirements, HR processes, and the overall role of HR in organizations.
For example, in the past HR professionals were available to assist employees with their HR
decisions (e.g., retirement and benefit allocations), but eHRM systems have shifted many of
these important responsibilities to employees (Stone, Stone-Romero, & Lukaszewski, 2003).
Thus, research is needed to examine the factors affecting the successful implementation and
acceptance of these systems.
Fourth, the use of eHRM systems has also prompted concerns about a host of new ethical
and social issues in organizations. For example, these systems may have a greater potential to
invade individual privacy than traditional HR systems (Stone & Stone, 1990). The primary
reason for this is that personal data are now stored electronically, and can be easily released to
third parties. As a consequence, individuals may be more concerned about potential invasions of
privacy with electronic than traditional HR systems. Although some research has focused on
privacy issues (e.g., Eddy, Stone, & Stone-Romero, 1999; Lukaszewski, Stone, & Stone-Romero,
2008; Pascal, Stone, & Stone-Romero, 2009), additional work is definitely needed on this topic.
Another critical concern associated with the use of these systems is the degree to which
they have an adverse impact on protected group members in our society (e.g., racial minorities,
women, people with disabilities, workers over 40 years of age) (Stone, Lukaszewski, & Isenhour,
2005). For example, some minority group members (e.g., Hispanic-Americans, African-Americans,
older individuals) may have less access to computers, and lower levels of computer
skills than those in the majority group (e.g., European-Americans) (Kuhn & Skuterud, 2000;
McManus & Ferguson, 2005). As a result, they may be less likely to use electronic systems to
apply for jobs or benefits than majority group members. Consequently, some eHRM systems
may have an adverse impact on these individuals, and will undoubtedly be
subject to legal scrutiny. It merits emphasis that the new electronic HR processes come under
the same legal guidelines (e.g., Civil Rights Acts, Age Discrimination in Employment Act, or
Americans with Disabilities Act) as any other HR practice. As a result, these important issues
need to be the subject of future research on eHRM.
Given the widespread use of eHRM systems in organizations, and the paucity of
theoretical and empirical research on the topic, the primary purposes of the proposed symposium
are to (a) consider some emerging issues in eHRM research, (b) present the results of recent
research on the relation between eHRM and strategic human resource management, e-selection,
e-learning and eHRM system implementation, and (c) offer direction for future research and
practice on the topic. We will also involve audience members in an interactive discussion on the
strategies that might be used to advance our understanding of the effectiveness and acceptance of
eHRM processes.
and
Teresa Svacina
University of Texas at San Antonio
In recent years large organizations have increasingly used computerized technology to
assist with the assessment and selection of job applicants. This process has been labeled
electronic selection (e-selection), and some evidence shows it can reduce hire time, decrease
costs, provide rapid feedback to applicants, and help organizations track system effectiveness
(Aguinis, Henle, & Beaty, 2001; Bartram, 2006b; Gueutal & Stone, 2005; Stone, Isenhour &
Lukaszewski, 2005). Although there are many advantages of using electronic systems,
researchers have argued that they also pose a number of challenges. For instance, relative to
traditional selection procedures, e-selection may (a) be more susceptible to cheating, (b) produce
results that are less reliable and valid, and (c) result in more negative applicant reactions (e.g.,
Bartram, 2006a; Chapman, Uggerslev, & Webster, 2003; Harris, 2006). Despite the widespread use
of e-selection systems, relatively little research has examined their acceptance and effectiveness.
Thus, the primary purposes of the proposed presentation are to (a) review the existing literature
on e-selection, (b) consider the implications of these results for research, practice, and society,
and (c) offer suggestions for future research on the issue.
The selection process in organizations often involves the assessment of applicants'
knowledge, skills, and abilities (KSAs) in order to determine their suitability for jobs. In
particular, on the basis of essential job elements (i.e., job analysis) samples of behavior or tests
are identified that can be used to select those applicants who are most qualified for jobs. There
are typically four steps in this process, and technology is now being used at each stage. Each of
these steps will be considered in the paragraphs that follow.
Step 1
In the first step, organizations conduct a job analysis to determine the essential functions
of jobs and the individual KSAs needed to perform the job. Even though organizations are using
online job analyses, little research has focused on the effectiveness of these systems (Aguinis,
Mazurkiewicz, & Heggestad, 2009; Reiter-Palmon, Brown, Sandall, Buboltz, & Nimps, 2006).
Likewise, to our knowledge research has not examined the extent to which organizations actually
use job analysis to design and select electronic assessment methods. For example, companies
now use software (e.g., Resumix) to conduct initial screening, and scan resumes for key words to
assess whether applicants are qualified for jobs. However, some researchers (e.g., Mohamed,
Orife, & Wibowo, 2002) have questioned whether the criteria used in key word systems are
job-related or based on job analysis. As a consequence, research is needed to assess these important
questions.
Step 2
At the second stage, organizations select one or more tests (e.g., cognitive ability tests,
interviews, application blanks, personality inventories) that can be used to assess applicants'
KSAs. For example, apart from paper and pencil testing systems, organizations are now choosing
a variety of electronic selection methods including (a) key word screening systems, (b)
interactive voice response (IVR) systems, (c) computer-based testing, (d) video interviews, or (e)
audio administered personality inventories. Interestingly, we know of no research that has
assessed how organizations choose e-selection methods, but some evidence shows that cost
containment and convenience may be among the primary factors (Buckley, Minette, Joy, &
Michaels, 2004). However, we believe that the emphasis on cost containment and convenience
may blur the actual goals of the assessment process. As a result, those selecting tests must
consider not only costs, but the psychometric quality of tests as well (e.g., reliability and
validity) (Naglieri et al., 2004). It merits noting that extant professional and legal guidelines (i.e.,
Principles for the Validation and Use of Personnel Selection Procedures, Society for Industrial
and Organizational Psychology, 2009) require that organizations consider the validity of
inferences made from test scores regardless of type of administration (Naglieri et al., 2004).
Thus, research is needed to determine whether organizations consider test validity along with the
cost, convenience, and innovativeness when choosing electronic selection methods.
Step 3
In the third step of this process, tests are administered to job applicants. As might be
expected, technology has had a profound impact on this phase of the selection process. Thus,
considerable research has examined the equivalence and psychometric properties of e-selection
compared to traditional methods (e.g., Burke & Normand, 1987; Butcher, Perry & Atlis, 2000;
Mead & Drasgow, 1993). However, the results of this research have been mixed. Some studies
have shown that scores on web-based and paper and pencil ability tests or personality inventories
are equivalent (Jones, Brasher, & Huff, 2002; Meade, Michaels, & Lautenschlager, 2007; Salgado
& Moscoso, 2003). Others have shown that web-based test scores may not be equivalent to those
on paper and pencil tests (Potosky & Bobko, 2004; Coyne, Warszta, Beadle, & Sheehan, 2005).
For example, results of two studies revealed that scores on web-based tests were lower than those
on paper and pencil tests (Potosky & Bobko, 2004; Coyne et al., 2005). One reason for the
difference in test scores is that individuals' computer skills and abilities may affect applicants'
scores or performance on e-selection methods. For instance, one critical research question is
whether online assessment actually assesses individuals' knowledge, skills, or abilities or their
ability to use computers or other forms of technology. For example, minority group members and
older individuals have been among the last to connect to the Internet because of limited economic access
and lack of training (Naglieri et al., 2000). Thus, research is needed to determine if individual
differences in computer self-efficacy, ethnicity, gender, and age influence test scores.
Step 4
In the fourth step of the selection process, organizations are concerned with determining
the psychometric properties of tests. Results of one study on this topic found that web-based tests
had more favorable psychometric properties than paper and pencil tests (Ployhart, Weekley, Holtz,
& Kemp, 2003). In addition, research has shown that e-selection methods did not have an
adverse impact on members of protected groups (Hattrup, O'Connell, & Yaer, 2006). However,
as noted above, we believe that additional research is needed to determine if electronic selection
systems have an adverse impact on members of protected groups (e.g., older workers, women,
and racial or ethnic minorities). Similarly, there are some key problems associated with
electronic assessment that may affect the reliability and validity of inferences made from these
types of data. For instance, research is needed to examine strategies for overcoming problems
associated with (a) test integrity or cheating, (b) test security, and (c) faking. Likewise, online
systems pose a number of potential threats to personal privacy because data can be easily
released to third parties. Thus, additional research is needed to examine the degree to which
applicants perceive that these systems pose a threat to personal privacy.
Apart from these issues, researchers have also assessed applicants' acceptance of and
reactions to electronic selection systems. Not surprisingly, results of this research have been
mixed. For instance, some studies have shown that electronic methods (e.g., interactive voice
response systems, computer assisted testing) are viewed as more fair than traditional methods
(Bauer, Truxillo, Paronto, Weekley, & Campion, 2004; Epstein & Klinkenberg, 2001). However,
other studies (Richman-Hirsch, Olson-Buchanan, & Drasgow, 2000) revealed that managers
completing multimedia assessment had more positive attitudes toward it than those completing
computerized or paper and pencil assessment. Likewise, a study by Chapman et al. (2003)
indicated that face-to-face interviews are viewed more favorably than electronic interviews.
These differences in findings have led some researchers to contend that individual differences in
age, gender and ethnicity may affect reactions to electronic selection methods (Harris, 2006;
Stone et al., 2003).
In summary, research has begun to consider the effectiveness and acceptance of
e-selection systems, but relatively few studies have focused on several important issues associated
with it. In addition, the findings of these studies have been inconsistent. Thus, our proposed
presentation will consider some potential reasons for these inconsistent findings, and offer
directions for future research and practice on the topic.
training needs. Given this flexibility, the market for e-learning is predicted to surpass $50 billion
by 2010 (Jones, 2007), and four million university students participate in at least one online
course each year (Allen & Seaman, 2008). In addition, organizations are using e-learning to save
millions of dollars annually on training costs (Salas, DeRouin, & Littrell, 2005) and we believe
that the financial pressures faced by both organizations and educational institutions will
accelerate the trend to e-learning in organizations.
Despite the global scope of e-learning, these initiatives have several shortcomings.
Trainees involved in e-learning feel isolated and distanced from others in the training and they
find it hard to remain engaged in the e-learning process (Salas et al., 2005; Welsh, Wanberg,
Brown, & Simmering, 2003). In addition, when participants find that the technology does not
support their preferred learning method, their experience and learning are reduced (Hornik,
Johnson, & Wu, 2007). For these reasons, e-learning attrition rates have approached 40 percent
(Levy, 2007).
In light of these shortcomings, researchers from multiple disciplines have sought to
better understand how to improve e-learning. Specifically, researchers in human resource
management (HR), management information systems (MIS), industrial psychology,
organizational communication, and education have each begun to develop their own research
questions and agendas. Separately, each of these disciplines has advanced our knowledge of
e-learning, but much of the research in each has been done in isolation from
other disciplines, addressing different research questions and often using different research
approaches.
For example, our initial review of the research suggests that research in HR and Industrial
Psychology typically focuses on extending the long tradition of training research by examining
motivational and trainee differences and their impact on e-learning outcomes (cf. Brown, 2001;
Klein, Noe, & Wang, 2006). Within the MIS discipline, research has tended to focus on how
technology can be used to support training processes and outcomes (cf. Alavi, Yoo, & Vogel,
1997; Johnson et al., 2008), and MIS researchers have taken a lead in developing models of e-learning
effectiveness (Alavi, Marakas, & Yoo, 2002). Fourth, research in computer-mediated
communication by Kiesler and her associates (Kiesler, Siegel, & McGuire, 1984, 1988) has
examined factors affecting the effectiveness of information presented through this medium.
Finally, education researchers have focused on pedagogical and learning differences between
online and face-to-face instruction or on course design and peer interaction (Allan & Lawless,
2005; Cybinski & Selvanathan, 2005). Many of these studies have used individual course case
studies, where instructors share their experiences teaching in e-learning environments.
As the brief discussion above illustrates, each of these disciplines has contributed to our
understanding of how to make e-learning more effective, but each has utilized discipline-specific
theories, focused on different aspects of e-learning, and asked different research questions. Thus,
a major challenge facing researchers and designers of e-learning environments is the awareness
of existing research from these multiple disciplines and how and when to integrate findings.
Currently, the disparate nature of these streams makes it challenging for those conducting
research on e-learning to uncover "principles and guidelines that can aid instructional designers
in building sound distance training" (Salas & Cannon-Bowers, 2001, p. 483). Therefore, the
proposed presentation will focus on summarizing the key models and findings with respect to
e-learning from these disciplines. Specifically, we will compare and contrast the research questions
of interest and the key findings from each discipline. Then, we will integrate findings from the
disciplines and focus on important questions still to be addressed. We will also offer directions
for future research and practice on the topic.
While project plans guide the sequence of activities and include critical paths, milestones,
and due dates for component assessment and completion, frequently little thought is given to
assessing teams and team leadership at various stages during the implementation process. A
review of the literature indicates that implementation delays and failures often are due to
problems within and between project teams (Barker & Frolick, 2003; Bingi, Sharma, & Godla,
1999; Ehie & Madsen, 2005) rather than with the technology itself.
The purpose of this presentation is to describe methods and provide an example of an
assessment of team effectiveness during the implementation process. The objective is to inform
management on assessment, problem identification, and team process improvement. As such,
this presentation meets both a research and a practical need by providing insight into this missing
human factors component in ERP implementation efforts.
Sample
An approach to assessing cross-functional teams is illustrated by an
evaluation of a large-scale ERP implementation project. The ERP consisted of financials,
human resources, customer relationship management, supply chain management, and business
intelligence software modules. The project had been in progress for two years with a budget of
$125 million, 80% of which had been spent. The go-live date for the ERP was 12 months
away, but upper management recognized that serious problems related to
team functioning were hindering project management effectiveness and the achievement of project
deadlines and, if left unaddressed, threatened the completion of the implementation.
There were 140 staff members working on the project: 101 organizational employees
from different functional areas and 39 consultant and contract employees, who participated on
one of eight teams. In addition, there were 16 team leaders, one organizational lead and one consultant
lead for each team. Data were gathered through a team survey and a structured
interview for team leads. Participation in the team survey was 96%, with 126 team members
voluntarily completing the survey. Participation in a structured interview by team leaders was
100%.
Method
The author used a multi-method evaluation approach to assess the effectiveness of the
teams' functioning, both within and between teams, in working toward established goals and deadlines. This
approach included structured interviews with the 16 team leads and a team member survey that included
attitudinal, evaluative, and open-ended questions; the scales used were informed by prior team
assessment work (e.g., Brannick, Salas, & Prince, 1997; Lencioni, 2002). The interview
instrument included multiple sections assessing: team functioning; dependency
between their team and other teams; effort in information seeking and in obtaining deliverables
from other teams; performance of project managers; ratings of the performance of
teams (within and between); and items assessing their overall evaluation of the ERP
implementation project.
The survey instrument completed by team members included scales to assess: effort in
information seeking; engagement; goal clarity; goal commitment; job characteristics; task
significance; job satisfaction; leadership effectiveness; team performance; perceived
organizational support; procedural and distributive justice; process clarity; role clarity; team
identification; trust within and between teams; urgency of deadlines; workload; and motivation.
Analysis and Results
In addition to assessment of the scales' factor structure, quantitative analysis of survey
scale items, and statistical comparison of responses across teams, thematic content analysis was
conducted separately for the structured interviews administered to the team leaders and the open-ended
comment questions on the survey to deal with the large amount of qualitative data. For
example, 795 comments were provided for the three open-ended questions included on the team member
survey. Themes and content areas were identified and, using these, each comment was assessed
and labeled according to its primary theme. The themes were then ranked
according to their frequency of occurrence.
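The coding-and-ranking step described above amounts to a frequency tally over theme-labeled comments. As a rough sketch only (the theme labels and comments below are invented for illustration, not drawn from the study's data), the ranking could be implemented as:

```python
from collections import Counter

# Hypothetical coded comments: each open-ended comment has been assigned
# a primary theme by the analyst (labels invented for illustration).
coded_comments = [
    ("communication", "Status updates between teams arrive too late."),
    ("leadership", "Team leads disagree on priorities."),
    ("communication", "We rarely hear decisions from the steering group."),
    ("workload", "Deadlines leave no slack for testing."),
    ("communication", "Cross-team dependencies are not announced."),
]

# Tally how often each primary theme occurs across all comments.
theme_counts = Counter(theme for theme, _ in coded_comments)

# Rank themes from most to least frequent, as in the analysis.
ranked = theme_counts.most_common()
print(ranked)  # [('communication', 3), ('leadership', 1), ('workload', 1)]
```

With real data, the list would hold all 795 comments and the analyst-assigned primary themes; the ranked tally then identifies the most frequently occurring problem areas.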
Statistical analysis of survey items along with content analysis identified areas and
factors of team effectiveness and ineffectiveness. The results from the team member surveys and
the team leader structured interviews provided a convergence of results. Because of the
multi-method approach used and the criteria assessed, we were able to pinpoint problems that existed
among the different teams, which teams were effective, what areas of dysfunction existed, and
where to implement changes (both structural and attitudinal interventions). Following the reporting
of results, management immediately made structural changes including changing project
management and team leadership, restructuring several of the eight teams, and improving
communication, decision-making, empowerment, and intervention training. This presentation
will provide details on the methods used, criteria assessed, analysis, and results with the
objective of providing an assessment approach to use for cross-functional teams in ERP/eHR
implementation projects.
topic (Kavanagh, Gueutal & Tannenbaum, 1990), and is the Co-Editor of a new book on Human
Resource Information Systems (Kavanagh & Thite, 2009). The symposium should run about 1
hour. The first presentation will be by Janet Marler and Sandra Fisher. They plan to present a
very intriguing review of the existing evidence on the relation between eHRM and strategic
human resource management. The second presentation by Dianna Stone, Kimberly Lukaszewski,
Eugene Stone-Romero, and Teresa Svacina will discuss the existing research on e-selection
systems, and offer directions for future research on the topic. The next presentation by Richard
Johnson will present a review of the literature on e-learning and suggest ways of advancing
research on this topic. Finally, James Dulebohn will consider the factors affecting the
effectiveness of eHRM implementation, and present the results of a study on the topic. Kimberly
Lukaszewski will serve as the Chair of the session.
References
Aguinis, H., Henle, C. A., & Beaty, J. C. (2001). Virtual reality technology: A new tool for
personnel selection. International Journal of Selection and Assessment, 9, 70-83.
Aguinis, H., Mazurkiewicz, M. D., & Heggestad, E. D. (2009). Using web-based frame-of-reference
training to decrease biases in personality-based job analysis: An experimental
field study. Personnel Psychology, 62, 405-438.
Wright, P. M., Dunford, B. B., & Snell, S. A. (2001). Human resources and the resource-based
view of the firm. Journal of Management, 27, 701-721.