
Regular Paper

Integrating Humans with Software and Systems: Technical Challenges and a Research Agenda
Azad M. Madni*

Intelligent Systems Technology, Inc., 12122 Victoria Avenue, Los Angeles, CA 90066

* E-mail: amadni@intelsystech.com

Received 24 February 2009; Accepted 17 April 2009, after one or more revisions
Published online 21 July 2009 in Wiley InterScience (www.interscience.wiley.com)
DOI 10.1002/sys.20145

ABSTRACT
As systems continue to grow in size and complexity, the integration of humans with software and systems
poses an ever-growing challenge. The discipline of human-system integration (HSI) is concerned with
addressing this challenge from both a managerial and technical perspective. The latter is the focus of this
paper. This paper examines this integration challenge from the perspective of capitalizing on the strengths
of humans, software, and systems while, at the same time, being mindful of their respective limitations. It
presents four key examples of HSI challenges that go beyond the usual human factors requirements. It
presents cognitive engineering as a key enabler of HSI and discusses the suitability of the Incremental
Commitment Model for introducing human considerations within the complex systems engineering
lifecycle. It concludes with a recommendation of specific research thrusts that can accelerate the matura-
tion and adoption of HSI methods, processes, and tools by the software and systems engineering
communities. © 2009 Wiley Periodicals, Inc. Syst Eng 13: 232–245, 2010

Key words: human-system integration; software engineering; systems engineering; cognitive engineering;
human performance; Incremental Commitment Model

1. INTRODUCTION

The challenges that humans encounter in trying to operate or work with complex systems are well documented in the literature dating back to the April 1991 Business Week cover story entitled, "I Can't Work This ?#!!@ Thing." That story focused on the difficulties people typically encounter with consumer products [Pew and Mavor, 2007]. However, this disconnect between people and technology is also well documented [Parasuraman and Riley, 1997; Chiles, 2001; Hymon, 2009], and is just as apparent in major large-scale system disasters such as Three Mile Island and Chernobyl. An equally compelling example is that of the Patriot missiles deployed in 2003 in the Iraq war. With this highly automated system, operators were trained to trust the system's software—a necessary design requirement for a heavy missile attack environment [Defense Science Board, 2005]. As it turned out, the missile batteries were operating in an environment with few missiles but many friendly aircraft in the vicinity. Compounding the problem was the fact that the operators were not adequately trained to know that the Patriot radar system was susceptible to recording spurious hits and occasionally issuing false alarms (i.e., identifying friendly aircraft as enemy missiles), without displaying the inherent uncertainty in target identification. Understandably, the operators, being unaware of these limitations, were inclined to trust the system's assessments and missile launch decisions against potentially hostile targets. In fact, these factors were in play in the unfortunate shoot
down of a British Tornado and a U.S. Navy F/A-18, for which a Defense Science Board report concluded that "more operator involvement and control in the functioning of a Patriot battery" would be necessary to overcome the system's limitations [Defense Science Board, 2005, p. 2]. Even today, system operators (e.g., pilots, air traffic controllers, power plant operators and maintainers, and military personnel) continue to receive more than their fair share of blame for systemic failures, as noted by Chiles (as cited in Defense Science Board, 2005, p. 2):

Too often operators and crews take the blame after a major failure, when in fact the most serious errors took place long before and were the fault of designers or managers whose system would need superhuman performance from mere mortals when things went wrong.

The primary design flaws that Chiles refers to were largely failures in properly coordinating the interactions between people and technology in system development and deployment [Woods and Sarter, 2000]. Simply put, for systems that are designed to work with humans, it is important that systems engineers give adequate consideration to operators and end users.

2. INFUSING HUMAN FACTORS INTO SYSTEMS ENGINEERING

In recent years, the DoD's push to incorporate the human factors discipline into systems engineering [Department of Defense Handbook, 1996] led to the creation of the new multidisciplinary field of Human Systems Integration (HSI). Since the advent of HSI, there has been a variety of views on the differences between human factors and HSI. The short answer, in accord with the definition of HSI, is that HSI subsumes human factors along with several other human-related fields such as safety and survivability. However, this clarification begs the question of how to get human factors professionals to view human-related problems from a systems perspective. To this end, the following definitions are offered.

Human factors is the study of the various ways in which humans relate to their external environment with the aim of improving operational performance while ensuring the safety of job performers throughout a system's lifecycle. Human-system integration is the study and design of interactions among humans (operators, maintainers, customers, designers, support personnel) and the systems they work with in a manner that ensures safe, consistent, and efficient relationships between them with error avoidance. It is important to realize that HSI is more than the sum of the contributions from the disciplines it draws upon. HSI is intended to optimize the joint performance of humans and systems in both normal operations and contingency situations. It advocates a full lifecycle view of the integrated human-machine system during system definition, development, and deployment. As importantly, HSI focuses on all human and system roles during the system lifecycle when applying human-system performance assessment criteria and methods. The HSI applicability regime varies with the size and complexity of the sociotechnical system (e.g., fighter aircraft versus air traffic control system). HSI is methodology-neutral, architecture-agnostic, and concerned with satisfying stakeholder needs, especially those of operational stakeholders. From an HSI perspective, satisfying operational stakeholder needs means that:

• The right tradeoffs have been made between HSI considerations and other systems considerations such as system security, dependability, survivability, and mission assurance.
• The joint human-machine system exhibits desired and predictable behavior.
• The software and system part of the human-machine system is perceived as having high usability and utility by humans operating (or working with) the system.
• The design of the integrated human-machine system circumvents the likelihood of human error, especially when adapting to new conditions or responding to contingencies.
• The human-system integration strategy takes into account technology maturity level and attendant uncertainties.
• The integrated human-machine system operates within an acceptable risk envelope by adapting its functionality and operational configuration, as and when needed, to stay within the envelope.
• The integrated human-machine system satisfies mission effectiveness requirements while achieving an acceptable ROI for the various stakeholders.

Today, most human factors and ergonomics courses do not address the range of requirements needed for HSI. When they do, they tend to address the human side of the equation, not the performance, effectiveness, and dependability goals of the integrated human-machine system. Similarly, most systems engineering courses do not go into the depth required to address HSI challenges and the integration of humans into software and systems. Not surprisingly, there is a lack of qualified HSI practitioners today. Remedying this situation requires attacking the problem on three fronts. The challenge for HSI practitioners is to mature and consolidate HSI practices for "prime-time" use [Madni, 1988b, 1988c]. Systems engineering educators need to incorporate HSI concepts, methods, and tools in the systems engineering curriculum. And, finally, the software and systems engineering communities need to foster a culture that embraces HSI practices and guidelines as an integral part of the systems engineering lifecycle.

3. ROAD TO THE PRESENT

Human-systems integration (HSI) is a multidisciplinary field comprising human factors engineering, system safety, health hazards, personnel survivability, manpower, personnel, training, and habitability. HSI practices recommend that human considerations be an important priority in system design/acquisition to reduce lifecycle costs and optimize human-system performance [Booher, 2003; Booher, Beaton, and Greene, 2009; Landsburg et al., 2008]. HSI encompasses interdisciplinary technical and management processes for
integrating human considerations within and across all software and system elements as part of sound software and systems engineering practices. This paper is primarily focused on the technical challenges.

Heretofore, the need to deliver systems within budget and on schedule has resulted in inadequate attention being given to HSI considerations in systems acquisition and systems engineering. Exacerbating the problem has been the fact that the human factors community has not done an effective job of communicating the value proposition of HSI to acquisition managers, program managers, and systems engineers. In fact, in 1970, Admiral Hyman Rickover characterized the promulgation of human factors considerations into R&D, engineering, and production in shipbuilding as being "about as useful as teaching your grandmother how to suck an egg." Since then, three compelling developments led to the resurfacing of the importance of human considerations in systems acquisition/engineering. The first was a 1981 Government Accountability Office (GAO) report that cited human error as a key contributor to system failure. The second was the highly publicized series of military, industrial, and commercial accidents involving human error in the 1970s and 1980s. The third was the recognition within DoD that manpower costs were a significant driver of total system lifecycle costs. And, most recently, an NRC study [National Research Council, 2007] suggested that the Incremental Commitment Model (ICM) was one potentially suitable framework (among others) for addressing HSI concerns during the complex system development lifecycle. The ICM, an outgrowth of spiral development, starts with an approximate solution and then iterates in risk-mitigated fashion, with each iteration incrementally adding capabilities until the final iteration produces the full capability set.

Despite this resurgence of interest in HSI and human error, misconceptions about human performance continue to linger. To begin with, experts in a particular domain (e.g., training) tend to view the solution to a human performance problem as one that can be fully addressed from within their own domain. Thus, a training specialist views a human performance deficiency as a training problem with a training solution. Similarly, a human factors engineer views the same problem as a man-machine interface design problem, while a manpower analyst views it as a human resource/task allocation problem. In reality, the problem could be some combination of the above, requiring a composite solution. The second common misconception is that integrating HSI with non-HSI acquisition/systems engineering domains is the answer to ensuring that HSI is taken seriously. In reality, the HSI domain is quite fragmented itself and needs to be integrated first. It is only after the HSI domain is internally integrated that the impact of human performance on system cost and schedule can be assessed and the business case made for HSI. The third common misconception is that the science that informs and guides human performance is mature enough to be operationalized into principles and guidelines for HSI. In reality, this science is quite new and subject to individual variation when compared to the physical sciences. The fourth common misconception is that human performance metrics have unique definitions. In reality, some of these metrics can have more than one interpretation. For example, workload can be defined as the ratio of the total tasks to be completed to the time available to complete them, or the number of concurrent cognitive tasks, or the size of the task stack [Madni, 1988b, 1988c]. In reality, the most effective HSI metrics are those that are tied to context. For military and aerospace missions, context can be characterized by the skill set of the humans involved, mission requirements and constraints, cost and schedule constraints, and the performance parameters of interest. Furthermore, the metrics need to be valid, reliable, and relevant to the mission's unique requirements.
ing, and production in shipbuilding as being “about as useful PERSPECTIVE
as teaching your grandmother how to suck an egg.” Since
then, three compelling developments led to the resurfacing of Before discussing the integration of humans with software
the importance of human considerations in systems acquisi- and systems, or integrating HSI into software and systems
tion/engineering. The first was a 1981 Government Account- engineering, it is necessary to examine human strengths and
ability Office (GAO) report that cited human error as a key limitations relative to software and systems and use that
contributor to system failure. The second was the highly knowledge to determine how best to integrate humans, soft-
publicized series of military, industrial, and commercial acci- ware, and systems [Madni, 1988a, 1988b, 1988c; Meister,
dents involving human error in the 1970s and 1980s. The third 1989]. With this perspective in mind, we present key findings
was the recognition within DoD that manpower costs were a from the literature that bear on human-system integration,
significant driver of total system lifecycle costs. And, most and, ultimately, on user acceptance (Table I). In what follows,
recently, an NRC [National Research Council, 2007] study four exemplar problems that illuminate HSI challenges are
suggested that the Incremental Commitment Model (ICM) presented.
was one potentially suitable framework (among others) for Who has the Final Say. In several fields (e.g., medical
addressing HSI concerns during the complex system develop- diagnosis, fighter aircraft automation, and power plant con-
ment lifecycle. The ICM, an outgrowth of spiral development, trol), determining who is in charge and who has the final say
starts with an approximate solution and then iterates in risk- is a key HSI problem. For example, the perennial question in
mitigated fashion with each iteration incrementally adding medical diagnosis is deciding when the medical community
capabilities until the final iteration which produces the full should trust the medical professional and when they should
capability set. trust the diagnostic aid. Trust and confidence are essential to
Despite this resurgence of interest in HSI and human error, achieve this “bond of confidence.” What this means is that
misconceptions about human performance continue to linger. there are circumstances when the human should not be al-
To begin with, experts in a particular domain (e.g., training) lowed to override the system and there are circumstances
tend to view the solution to a human performance problem as when the system should not be allowed to override the human.
being one that can be fully addressed from within their own With respect to medical diagnosis, the key is to appropriately
domain. Thus, a training specialist views human performance apply the combination of the medical professional and the
deficiency as a training problem with a training solution. diagnostic aid so that based on the circumstances, either the
Similarly, a human factors engineer views the same problem human or the aid or both are in a position to solve the problem.
as a man-machine interface design problem, while a man- The literature on human-machine systems offers several
power analyst views the same problem as a human re- examples of the complexities involved in designing decision
source/task allocation problem. In reality, the problem could aiding/performance support systems for cognitively demand-
be some combination of the above, requiring a composite ing environments. There is ample evidence that a suboptimal
solution. The second common misconception is that integrat- solution can produce performance degradation of the overall
ing HSI with non-HSI acquisition/systems engineering do- human-machine system. Some of the key findings in this
mains is the answer to ensuring that HSI is taken seriously. In regard are presented in Table II. An important aspect of such
reality, the HSI domain is quite fragmented itself and needs performance degradation is the lack of “fit” between the
to be integrated first. It is only after the HSI domain is cognitive demand of the work environment, the designed
internally integrated that the impact of human performance interventions, and the mental models of humans.
on system cost and schedule can be assessed and the business Risk Homeostasis. Wilde [2001] put forth the hypothesis
case made for HSI. The third common misconception is that that humans have their own fixed level of acceptable risk.
the science that informs and guides human performance is While initially subject to criticism, this hypothesis was borne
mature enough to be operationalized into principles and out in studies in Munich, Germany and in British Columbia,
guidelines for HSI. In reality, this science is quite new and Canada. In the Munich study, half a fleet of taxicabs was
subject to individual variation when compared to the physical equipped with anti-lock brakes (ABS), while the other half
sciences. The fourth common misconception is that human was provided conventional brake systems. Pursuant to testing,
performance metrics have unique definitions. In reality, some it was discovered that the crash rate was the same for both
of these metrics can have more than one interpretation. For types. Wilde concludes that this result was due to the fact that

drivers of ABS-equipped cabs took more risks because they assumed that the ABS would take care of them. By the same token, the non-ABS drivers drove more carefully since they recognized that ABS would not be there to help them were they to encounter hazardous situations. Likewise, it has been found that drivers are less careful around bicyclists wearing helmets than around unhelmeted riders. Wilde states that the massive increase in car safety features has had little effect on the rate or costs of crashes overall, and argues that safety campaigns tend to shift rather than reduce risk [Wilde, 2001].

A related study in British Columbia was concerned with an anti-drunk-driving government campaign in 1970. The findings once again seem to support the risk homeostasis hypothesis. In this case, the accident rate associated with DUI was reduced by approximately 18% over a 4-month period. However, accidents caused by other factors increased 19% during the same period. The explanation offered was that people started engaging in more hazardous actions on the road (i.e., accepting higher risks), leaving "target risk levels" unchanged.

Assignment of Blame. In 2008, a Metrolink commuter train crashed headlong into a Union Pacific freight locomotive after going through four warning lights. The engineer (i.e., the driver) did not hit the brakes before the L.A. train crash. A teenage train enthusiast later claimed to have received a cell phone text message from the driver a minute before the collision. The possible reasons initially given for the crash included: the engineer (driver) was distracted while texting; the engineer was in the midst of an 11 1/2-hour split shift when he ran the first light; a possible breakdown of radio communication between the engineer and the conductor, who normally call each other by radio to confirm the signals the engineer sees; the engineer may have suddenly taken ill; and sun glare may have obscured the engineer's view of the signal.

The official Metrolink report stated that Metrolink needs to improve monitoring of its employees, enhance its safety technology, and do more to inform its board members about how the railroad works. The major findings of this report, written by a panel of industry experts, were presented to the Metrolink Board of Directors in December 2008. The panel
noted that Metrolink staff needed to improve its oversight of the businesses that the rail carrier hires to run its trains. The panel also said that Metrolink board members—who have the final say over the agency—need to be better informed about the railroad's operations. In addition, the panel recommended that video cameras be placed in locomotives to monitor train engineers—something employee groups have protested—and called for the agency to move quickly to adopt a GPS-based computer system to track train locations [Hymon, 2009].

So, was the Metrolink train accident a human error, a systemic problem that manifested itself as a human error, or both? The answer is both. The driver was working a split shift; he was tired; he was also multitasking. Humans don't multitask well and will be error-prone in such circumstances. However, the system was also not designed for integration with the human. The system assumed an optimal human, i.e., one who could multitask, one who would not fatigue, and one who was goal-driven or a utility maximizer. Humans are none of these! This was an accident waiting to happen.

Indiscriminate Automation. A decade ago, a blind-side indicator was developed for automobiles. It was intended to show an object in the driver's blind spot. However, this device was never approved, because behavioral research suggested that drivers were going to overuse the indicator and not bother to look back over their shoulder when changing lanes. This would have been a clearly undesirable change in driver behavior. The lesson is that indiscriminate introduction of technology without regard to desired behavior patterns can potentially change human behavior, and not necessarily for the better. This kind of analysis is key to avoiding unintended consequences [Madni, 2008; Madni and Jackson, 2009].

The foregoing examples illuminate several key factors. First, in a tightly coupled system, any change to the machine will cause humans to change as well. Such a change could be undesirable in the sense that it could lead to unintended consequences. Second, unwarranted assumptions about the human can lead to tragic accidents [Madni, 2008]. For example, assuming that humans are optimal information processors can lead to dire results because humans do fatigue and don't multitask well. Third, the role of the human and the importance of that role in the overall system are key to architectural paradigm and algorithm selection. Specifically, it is important to determine whether humans are central to system operation, or merely an adjunct or enabler that is expected to be replaced by automation in the future. Fourth, system architects need to focus on combined human-system operation, not the characteristics of each in isolation. This also means that the focus should be on combined metrics, not individual metrics. And, finally, a change in the operational environment can potentially change how people perceive and compensate for risks. These considerations need to be taken into account, especially for contingency situations that change risks, the perception of risks, and the compensatory measures that people employ.

5. ROLE OF COGNITIVE ENGINEERING IN HSI AND SYSTEMS ENGINEERING

Operationalizing HSI from a technical perspective requires methods and techniques from the field of cognitive engineering. Cognitive engineering draws on multiple disciplines including human factors engineering, cognitive psychology, human-computer interaction, decision sciences, computer science, and other related fields. The primary motivation for introducing cognitive engineering, as an HSI approach, into the systems engineering lifecycle is to prevent technology-induced design failures by explicitly taking into account human processing characteristics within the context of task performance and the operational environment. This characterization covers all human roles (e.g., operator, tester, system administrator, and maintainer) that come into play in the systems engineering lifecycle. Technology-induced design failures have, in part, resulted from a lack of understanding of the implications of rapid advances in technology. These advances have resulted in the automation of certain system functions, which has changed the operator's role from being a controller of relatively simple systems to that of a supervisory controller of highly complex, automated systems. This shift in the role of human operators has placed greater emphasis on their ability to: understand the operation of complex systems that accomplish tasks under their supervision; access and integrate relevant information rapidly; and monitor and compensate for system failures. Some of the major problems that can arise in systems that are engineered without regard to cognitive considerations are presented in Table III.

Table III. Problems Arising from Failure To Incorporate Cognitive Engineering Principles

The flip side of this equation is just as revealing. Cognitive engineering has certain limitations that prevent its widespread adoption within systems engineering. First, cognitive engineering has historically focused solely on the front-end analysis portion of systems engineering. Second, cognitive engineering is mostly focused on a single operator at a single workstation. The work in cognitive work analysis of a team of operators collaborating on a shared task is still in the nascent
stages. Third, cognitive engineers have yet to make a compelling return-on-investment (ROI) argument for acquisition program managers despite ample supporting evidence. Nevertheless, there are several applications today, especially within the military, that require the infusion of cognitive engineering within systems engineering. For example, the Air and Space Operations Center (AOC) combat operations scenarios provide an excellent platform for highlighting specific gaps in systems engineering that can be filled with the introduction of cognitive engineering considerations (e.g., criteria, principles, and guidelines) within the systems engineering lifecycle.

In light of the foregoing, there is a need for a comprehensive model that spans the end-to-end systems engineering lifecycle process and that embeds cognitive engineering practices, processes, and tools at appropriate points in that lifecycle. Successfully implementing this approach requires the "buy-in" of three different classes of stakeholders: the engineering practitioner, the end user/operator, and government acquisition managers/industry representatives (Table IV).

Today, after continual DoD prodding, the software and systems engineering communities are beginning to focus on how best to incorporate the human component in system design and development.

Misperceptions Linger. Despite the renewed interest in HSI in the software/systems engineering communities, misconceptions linger. The single biggest misconception is that software and system engineers continue to view humans as "suboptimal job performers." This mindset naturally leads system engineers and designers to build systems that shore up or compensate for human shortcomings. With this mindset, it is not surprising that humans are forced to operate systems that are inherently incompatible with the human's conceptualization of the system. For example, when systems employ computational methods (e.g., Bayesian Belief Networks) that tend to be incompatible with the human's conceptualization of the problem domain, these methods often produce manifestations of human shortcomings.

Humans, in general, are not necessarily Bayesian [Kahneman and Tversky, 1982], in that human cognitive processes do not seem to naturally follow the Bayesian philosophy. The fact that humans are not Bayesian has been demonstrated in both laboratory settings and actual applications [Meyer and Baker, 2001]. The studies of Kahneman and Tversky [1982] have shown that experts fail to change or update their estimates in view of new information. Mathematically, the failure to update estimates means that P(A|B) = P(A|C); that is, the probability of A is not altered when the conditions governing A are changed from B to C. This equation would only be true if P(A) were independent of any conditioning, that is, if P(A|C) = P(A|B) = P(A). However, in estimating probabilities, it is unlikely that any event would be totally independent of conditions. There are other characteristics of humans that prevent experts from being Bayesian. Some of these characteristics include the inability to grasp the effects of sample size, the frequencies of truly rare events, the meaning of randomness, and the effects of variability. These same failings also contribute to human difficulty in estimating probabilities in general [Kahneman and Tversky, 1982]. The human brain does not follow the axioms (rules) of probability, such as that all probabilities lie in the [0, 1] interval and that the probabilities of mutually exclusive, exhaustive events sum to 1. Consequently, the probabilities elicited from a human do not represent a true, mathematical probability measure. In short, starting with an erroneous premise, this mindset not only creates conflict in the way that humans and the system "conceptualize" work and "update their beliefs," but also totally fails to capitalize on human adaptability, ingenuity, and creativity.
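As a concrete illustration of the updating failure described above, the following sketch contrasts a Bayesian revision of belief with the "no update" behavior of an anchored expert. The numbers are hypothetical and purely illustrative:

    def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
        """Return P(H | evidence) from the prior P(H) and the two likelihoods."""
        p_evidence = (p_evidence_given_h * prior
                      + p_evidence_given_not_h * (1.0 - prior))
        return p_evidence_given_h * prior / p_evidence

    prior = 0.10                                   # P(A): initial belief that a track is hostile
    posterior = bayes_update(prior, 0.80, 0.20)    # condition B: new evidence observed
    print(round(posterior, 3))                     # 0.308 -- P(A|B) should differ from P(A)
    # The anchored expert described above, by contrast, reports 0.10 regardless of B or C.

The gap between 0.10 and 0.308 is exactly the kind of discrepancy that a system built on Bayesian machinery will exhibit relative to the human's stated beliefs.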
Methods and Tools. A variety of methods and tools are employed today by cognitive engineering and human factors professionals [Madni, Sage, and Madni, 2005]. By far the most widely used tool is Cognitive Task Analysis (CTA). CTA is a collection of methods and techniques for describing, modeling, and measuring the mental activities (i.e., cognition) associated with task performance [Chipman, Schraagen, and Shalin, 2000]. CTA has been used to assess throughput, quality, and the potential for human errors in information processing tasks. CTA, as originally developed, focused on individual cognition in task performance [Klinger and Hahn, 2003]. CTA has a variety of forms. Klein [1993] identified four broad classes of Cognitive Task Analysis methods: questionnaires and interviews, controlled observation, critical incidents, and analytical methods (Table V). These methods vary with respect to how they elicit/represent expert knowledge and contribute to expert performance.

Table IV. Challenges

Systems Engineering DOI 10.1002/sys


238 MADNI

Table V. Cognitive Task Analysis Approaches

Central to CTA are the concepts of taskwork and teamwork, which were developed to differentiate between individual and team tasks [Morgan et al., 1986]. Taskwork consists of individuals performing individual tasks, whereas teamwork consists of individuals interacting or coordinating tasks that are important to the goals of a team [Baker, Salas, and Cannon-Bowers, 1998]. With respect to the latter, Bowers, Baker, and Salas [1994] compiled a team task inventory by identifying teamwork behaviors that were subsequently reviewed and refined by Subject-Matter Experts (SMEs). Thereafter, respondents were asked to rate each task (behavior) on multiple factors (e.g., importance to train, task criticality, task frequency, task difficulty, difficulty to train, overall task importance). The results showed that distinguishing teamwork from taskwork is important and that further research was required to study team behaviors such as interaction, coordination, relationships, cooperation, and communication [McIntyre and Salas, 1995]. In the same vein, Dieterly [1988] identified eight dimensions along which tasks can be decomposed. These dimensions were grouped into two categories: task characteristics that were independent of the team concept, and task characteristics that specifically applied to a team context. The focus of the research became methods for identifying team tasks, measuring team-level concepts, and integrating teamwork behaviors into traditional task analysis methods. It was soon discovered that until these issues were explicitly addressed, there would continue to be a void when analyzing team tasks. This void was confirmed by Bowers, Baker, and Salas [1994], who found a large proportion of unexplained variance when applying task analysis to a team.

Over the last decade, however, cognitive engineering researchers began to focus specifically on team CTA, which differs from traditional CTA in that it explicitly focuses on teamwork requirements [Baker, Salas, and Cannon-Bowers, 1998]. The starting point used for team CTA was capturing the cognitive processes of a team by focusing on the various ways that the team coordinates, acquires an understanding of the different team members, and then synthesizes task elements [Klinger and Hahn, 2003; Bordini, Fisher, and Sierhuis, 2009]. Team CTA today is conducted to assess gaps in team-specific and task-specific competencies. Team-specific competencies typically apply to a particular team and encompass tasks that are performed by the team. Task-specific competencies, on the other hand, apply only to certain tasks. The findings of Cannon-Bowers et al. [1995] suggest 11 knowledge requirements, 8 specific skill dimensions, and 9 attitude requirements for a team. Stevens and Campion's [1994] research appears to corroborate these findings.

Today a variety of methods are used to perform team CTA [Bonaceto and Burns, 2007], including the use of simulators to examine team processes and performance [Weaver et al., 1995; Madni, Sage, and Madni, 2005]. These simulator/simulation-based approaches allow for observable outcomes as well as subjective assessments of team performance.

Advancing the State of Readiness of Cognitive Engineering. One of the main challenges in infusing cognitive engineering principles and practices into systems engineering is to make the value proposition of cognitive engineering clear to acquisition managers, program managers, and industry practitioners, who typically face stringent schedules, development risks, and tight budgets. So, the question that needs to be answered, first and foremost, for acquisition managers is this: Is cognitive engineering ready for prime-time use? To answer this question, we need to examine the state of readiness of cognitive engineering. Fortunately, there are several telltale indicators of whether or not a technology is ready for adoption (Fig. 1).

Figure 1. Technology adoption lifecycle. [Color figure can be viewed in the online issue, which is available at www.interscience.wiley.com.]

This figure, which presents the technology adoption lifecycle (starting with skepticism and concluding with adoption), places the state of readiness of cognitive engineering at the enthusiasm stage. This stage corresponds to the fact that a few positive experiences have been recorded
with pilot experiments. Table VI further elaborates on the state of readiness of cognitive engineering.

Table VI. State of Readiness of Cognitive Engineering

In light of the foregoing, a multipronged strategy is required to garner the attention of acquisition/program managers and industry practitioners. The first is to make the business case for cognitive engineering in terms of return on investment (ROI) so we can move the acquisition/program managers and industry practitioners up the ladder of "predisposition to act" in the technology adoption lifecycle (Fig. 1).

The second is to overcome the barriers to adoption of cognitive engineering. Barriers to implementing cognitive engineering in systems integration programs stem from three sources: (a) the prevailing culture of systems integration program personnel; (b) the near-term challenges that perennially confront program managers; and (c) the maturity of the available tools. Table VII characterizes these barriers.

Table VII. Barriers to Infusion of Cognitive Engineering into Systems Engineering

The strategies to overcome these barriers need to squarely focus on program management, culture, and the maturity of the tools. From a cultural perspective, it is important to cultivate program managers who have an open, receptive mind and who are responsible for the introduction of human factors/cognitive engineering in the system's lifecycle. As importantly, it is important to convey the value proposition of cognitive engineering to program managers in both qualitative and quantitative terms (e.g., ROI, elimination of rework, reduction in human error rates). And, when it comes to the maturity of the available tools, stretch goals need to be defined for tool vendors, and the systems engineering community needs to be exposed to the artifacts that can be created using cognitive engineering tools and the impact of those artifacts on systems engineering.

The third is to leverage the network of professional societies (e.g., INCOSE, HFES, IEEE SMC), industry associations (e.g., NDIA), and technology forums at universities to spread the word and grow a following. Publishing in systems engineering journals and presenting at major conferences is also
key to cultivating a following. The fourth is to make cognitive systems engineering part of graduate studies in systems engineering and of distance learning curricula at major universities. The fifth is to document case studies that can be shared with the systems engineering community. The case studies framework proposed by Friedman and Sage [2004] holds promise in this regard. Collectively, these strategies can advance the state of readiness of cognitive engineering and make it attractive for adoption by acquisition/program managers and industry advocates [Madni, Sage, and Madni, 2005].

6. NATIONAL RESEARCH COUNCIL STUDY RECOMMENDATIONS

The recently completed DoD-sponsored NRC study, "HSI in the System Development Process," identified five core principles that are critical to the success of human-intensive system development and evolution: satisfying the requirements of the system stakeholders (i.e., buyers, developers including engineers and human factors specialists, and users); incremental growth of system definition and stakeholder commitment; iterative system definition and development; concurrent system definition and development; and management of project risks.

After analyzing several candidate system development models in terms of these five principles, the NRC committee selected the Incremental Commitment Model as the systems engineering framework with which to examine the various categories of methodologies and tools that provide information about the environment, the organization, the work, and the human operator at each stage of the design process [Pew and Mavor, 2007]. Although the ICM is not the only model that could be used for this purpose, it provides a convenient and robust framework for investigating HSI concepts. A central focus of the model is the progressive reduction of risk through the system development lifecycle to produce a cost-effective system that satisfies the needs of the different stakeholders. The committee concluded that the use of the ICM can achieve a significant improvement in the design of major systems, particularly with regard to human-system integration. Table VIII presents a summary of the recommendations offered by this committee.

Table VIII. NRC Committee Recommendations

The committee's research recommendations include: (a) the development of shared representations to enable meaningful communications among hardware, software, and HSI designers as well as within the human-system design group and within the stakeholder community; (b) the extension and expansion of existing HSI methods and tools, including modeling and simulation methods, risk analysis, and usability evaluation; and (c) the full integration of human systems and
engineering systems. Table IX summarizes their recommendations.

Table IX. Research Recommendations

The Incremental Commitment Model. The Incremental Commitment Model (ICM) is a risk-driven extension of the spiral model [Boehm, 1988]. It achieves progressive risk reduction in system development to produce a cost-effective, stakeholder-responsive system. With the ICM, cost-effectiveness is achieved by focusing resources on high-risk aspects of the development and deemphasizing aspects that are viewed as posing limited risk. All forms of potential risk, including hardware, software, and HSI risks, are assessed to identify risk-reduction strategies at each stage in the system development process. The model recognizes that, in very large, complex systems, requirements change and evolve throughout the design process. As such, the approach to acquisition is incremental and evolutionary: acquiring the most important and well-understood capabilities first; working concurrently on engineering requirements and solutions; using prototypes, models, and simulations as ways of exploring design implications to reduce the risk of specifying inappropriate requirements; and basing requirements on stakeholder involvement and assessments. When tradeoffs among cost, schedule, performance, and capabilities are not well understood, the model provides a framework to specify priorities for the capabilities and ranges of satisfactory performance, rather than insisting on precise and unambiguous requirements. The ICM consists of five phases: exploration, valuation, architecting, development, and operation. Each phase revisits every major activity: system scoping, goals and objectives, requirements and evaluation, and operations and retirement. The specific level of effort on each activity is risk-driven and thus varies across lifecycle phases and from project to project. The ICM provides a suitable framework for concurrently investigating and integrating the hardware, software, and human factors elements of systems engineering. Specifically, it supports the concurrent exploration of needs and opportunities, and the concurrent engineering of hardware and software with the necessary emphasis on human considerations. It employs anchor point milestones to synchronize and stabilize concurrently engineered artifacts. These characteristics of the ICM make it well suited as an "integration platform" for HSI.
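The following schematic sketch illustrates the risk-driven idea above; it is not the ICM specification. The phase and activity names follow the text, while the risk weights are placeholder assumptions:

    # The five ICM phases and the recurring activities named in the text.
    PHASES = ["exploration", "valuation", "architecting", "development", "operation"]
    ACTIVITIES = ["system scoping", "goals and objectives",
                  "requirements and evaluation", "operations and retirement"]

    def allocate_effort(risks):
        """Relative effort per activity, proportional to its assessed risk weight."""
        total = sum(risks.values()) or 1.0
        return {activity: risks.get(activity, 0.0) / total for activity in ACTIVITIES}

    print("ICM phases:", " -> ".join(PHASES))
    # Early-phase example: requirements/HSI risk dominates, so it receives most of the effort.
    exploration_risks = {"requirements and evaluation": 0.6,
                         "system scoping": 0.3,
                         "goals and objectives": 0.1}
    print("exploration effort:", allocate_effort(exploration_risks))

Repeating this assessment at each phase is what makes the level of effort vary across lifecycle phases and from project to project, as described above.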
7. DEVELOPING A RESEARCH AGENDA

In light of the foregoing, we can define the research issues that need to be addressed when infusing HSI principles and criteria into software and systems engineering (Table X). Building on the NRC findings, the following paragraphs address each of these research issues.

Table X. Research Questions

These are difficult questions to answer. Their answers will come gradually as greater emphasis is placed on HSI research. It is in this spirit that the following research thrusts are presented.

HSI Problem Identification. At the outset, it is important to identify the underlying concerns that motivate the introduction of HSI into the software/systems engineering process. The underlying problems could be one or more of the following: the system is too difficult to operate; human error rates are unacceptably high; the system is not being used at all or not being used as intended; the system is too hard to maintain; the system is too expensive; or the system does not scale. To this end, research is needed in advancing the state of the art in virtual prototyping and human-machine interaction simulation.

Development of a Shared Representation. Per the NRC's recommendation, the development of a shared representation is key to enabling meaningful communication among hardware, software, and HSI personnel as well as within the HSI
personnel and the stakeholder community. To this end, the development of a common ontology and a lexical database that eliminates the polysemy and synonymy problems can serve as a starting point for such discourse.
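As a toy illustration of the kind of lexical mapping such a shared representation could start from (the terms and canonical concept names below are assumptions made up for this sketch), synonyms drawn from different disciplines resolve to a single canonical concept, while a polysemous term would be disambiguated before lookup:

    # Hypothetical synonym-to-concept table; a real ontology would be far richer.
    LEXICON = {
        "operator": "human_agent",
        "user": "human_agent",
        "crew member": "human_agent",
        "workload": "cognitive_demand",
        "task loading": "cognitive_demand",
    }

    def canonical(term: str) -> str:
        """Map a discipline-specific term to its canonical concept (identity if unknown)."""
        return LEXICON.get(term.lower(), term)

    assert canonical("User") == canonical("operator") == "human_agent"   # synonymy resolved

Even a table this small makes the point: hardware, software, and HSI specialists can disagree about words while agreeing about concepts, and the shared representation is what records that agreement.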
Expansion of Existing Methods and Tools. Thus far, the modeling and simulation tools, as well as the risk analysis and usability evaluation methods, have focused on front-end analysis and taken a narrow view of human-system integration, according to the NRC study. Research is needed to extend the methods and tools to span the full system lifecycle while also enriching the scope of the modeling, simulation, and analytic tools.

Full Integration of Human Systems and Systems Engineering. The integration of HSI within systems engineering is not just a technical issue. It is also an organizational and cultural issue. As such, the principles of HSI need to be applied to design and development organizations, not just to the systems being procured. The NRC study recommends the full integration of human systems with systems engineering across the seven key areas presented in the report. To this end, research is needed in extending HSI methods to complex systems and SoS, while recognizing the trend that SoS and complex systems are gradually coming together in terms of how they are viewed.

Human Performance Modeling. Human performance varies nonlinearly with a variety of factors such as stress, anxiety level, workload, fatigue, and motivation level. For example, the Yerkes-Dodson law shows that as stress increases, so does performance, up to a point after which it rounds out and tapers off (the inverted U-curve). Cognitive workload becomes a key concern in several functions/jobs that are mentally taxing [Madni, 1988b, 1988c; Wickens and Hollands, 1999]. Examples of such functions/jobs are air traffic control, fighter aircraft operations, military command and control, nuclear power plant operation, and anesthesiology. The key characteristics of high cognitive load tasks are that they are stimulus-driven (i.e., not self-paced), they produce large fluctuations in demand, they involve multitasking, they generate high stress, and they are highly consequential. The basic approaches to measuring cognitive load are: analytic (e.g., task difficulty, number of simultaneous tasks), task performance (i.e., primary versus secondary task performance), physiological (i.e., arousal/effort), and subjective assessment (e.g., Cooper-Harper ratings, Subjective Workload Assessment Technique or SWAT). Research is needed in developing human performance models and simulations that exhibit the requisite human performance/behavioral profile as a function of stress and work demands. Such models can then be used in "test-driving" the goodness of HSI in a particular system design and also in comparing two or more candidate designs.
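A minimal sketch of such a model, assuming an inverted-U (Yerkes-Dodson-style) relationship between arousal and performance, is shown below; the Gaussian shape, the optimum, and the width are illustrative assumptions rather than a validated human performance profile:

    import math

    def performance(arousal, optimum=0.5, width=0.25):
        """Relative task performance (0..1) as an inverted-U function of arousal (0..1)."""
        return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

    # Sweep a hypothetical work-demand profile to "test-drive" a candidate design.
    for arousal in (0.1, 0.3, 0.5, 0.7, 0.9):
        print(f"arousal={arousal:.1f}  relative performance={performance(arousal):.2f}")

Two candidate designs could then be compared by driving the same model with the demand profiles each design imposes on the operator.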
Architecture Design. The design of the human-system architecture is highly dependent on the roles that humans play in the overall system. In particular, human roles have a significant impact on the architecture depending on whether the human is central to the system or merely an enabling agent [Madni, 2008]. Table XI presents the different human roles that bear on architectural design. Research is needed in architecture design with various levels of involvement of the human. In particular, a human performance testbed needs to be developed that can support architecture sensitivity analysis with respect to changes in critical human parameters.

Table XI. Human Roles and Architectural Implications


System Inspectability. System inspectability is key to ensuring that the human operator does not make erroneous assumptions about the system state and then act on those erroneous assumptions, compromising the safe operation of the human-machine system. Cognitive compatibility is concerned with assuring that system "processes" are patterned after, or compatible with, the human's conceptualization of work tasks. In particular, a cognitive system is one that employs psychologically plausible computational representations of human cognitive processes as a basis for system designs that seek to engage the underlying mechanisms of human cognition and augment the cognitive capacities of humans, not unlike a cognitive prosthesis. In this definition, emphasis is placed on psychologically plausible machine-based representations, the key distinguishing characteristic of systems that are patterned in a manner compatible with the human's conceptualization of the task (i.e., the task schema). Research in the development of inspectable cognitive systems with explanatory facilities is especially relevant to optimizing human-system compatibility and ultimately achieving robust human-systems integration.

Consolidating the Human Performance Body of Knowledge. At the present time, the human performance body of knowledge is highly fragmented. Some of the main categories are: human workload (cognitive and psychomotor); human decision making (under normal and time-stressed conditions, and in the face of uncertainty and risk); human perception of risk and risk homeostasis; sociocultural factors in human decision making and consensus building; human vigilance and arousal; and physiological/mental stress and fatigue. There are several others as well. Research is needed in determining how these various considerations interact and in what situations. In this regard, organizational simulations can provide new insights [Rouse and Bodner, 2009]. There are other measures that can be taken as well. For example, an updated and online version of Boff and Lincoln's Engineering Data Compendium [1988] can provide a useful starting point for some of these considerations. Another possibility is to provide access to such information in a Wikipedia-style format.

Integrated Aiding-Training Continuum. Historically, operator (and team) training and aiding have been viewed as distinct capabilities. However, recent research has shown that aiding and training lie along a continuum of human performance enhancement [Madni and Madni, 2008]. Research is needed in defining the architecture of an integrated aiding-training framework capable of dynamically repurposing Shareable Content Objects (SCOs) for aiding, e-learning, just-in-time training, or on-demand performance support based on user needs and the operational context.

HSI Patterns. Humans interact with systems differently based on their role(s) relative to the system (i.e., supervisory controller, monitor, enabler, or supporting agent). The architecture for each of these contexts can be expected to be different and potentially amenable to characterization through patterns. In certain complex systems, the human role can be expected to change, for example, from supervisor/controller to enabler based on changes in context. As such, the architecture needs to be adaptive, and the pattern needs to reflect this characteristic. Research is needed in defining various types of architectures and their adaptation requirements, with a view to capturing these findings in the form of architectural patterns.
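To give a flavor of what such patterns might look like, the toy sketch below keys an architectural configuration to the human's current role (the role names follow the text; the pattern descriptions are invented for illustration and are not from the paper):

    # Hypothetical role-to-pattern table; a context change re-selects the pattern.
    ROLE_TO_PATTERN = {
        "supervisory controller": "human-on-the-loop with veto authority over automation",
        "monitor":                "automation-primary with alerting and drill-down displays",
        "enabler":                "human-in-the-loop task sharing",
        "supporting agent":       "automation-primary with human exception handling",
    }

    def select_pattern(role: str) -> str:
        """Return the architectural pattern associated with the human's current role."""
        return ROLE_TO_PATTERN[role]

    # A contingency shifts the human from supervisor/controller to enabler,
    # so the adaptive architecture re-selects its configuration.
    for role in ("supervisory controller", "enabler"):
        print(role, "->", select_pattern(role))

Capturing such mappings, together with the triggers that move a system from one row to another, is one concrete form the recommended research on HSI patterns could take.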
8. CONCLUSIONS

The ultimate goal of system development is to deliver a system that satisfies the needs of its stakeholders (e.g., users, operators, maintainers, owners), given adequate resources to achieve that goal. From an HSI perspective, satisfying stakeholder needs means developing a system that delivers requisite value to the various stakeholders, is predictable and dependable, and capitalizes on the respective strengths of humans and software/systems technologies while circumventing their limitations. Realizing such a system requires: understanding various aspects of human behavior (e.g., cognitive strategies, risk perception, and social and cultural considerations); defining HSI metrics that allow one to judge which of multiple candidate systems exhibits the better HSI; and managing a variety of risks associated with, for example, user characteristics, deployment environment constraints, and technological maturity. Cognitive engineering and the Incremental Commitment Model are two key enablers that contribute to the achievement of these objectives. Specifically, cognitive engineering provides insights into human cognitive processes and behaviors, and into the impact of various factors on human performance, including the likelihood of human error. The ICM provides a risk-driven, incremental process for incorporating HSI principles and for investigating and resolving HSI issues during each stage of development.

At the present time, human capabilities and limitations, and their implications for the design, deployment, operation, and maintenance of systems, have not been explicitly addressed in systems engineering and acquisition lifecycles. The discipline of HSI is specifically intended to remedy this problem. However, for HSI to take hold within the systems acquisition and engineering communities, several advances need to occur. First, the fragmented body of knowledge in human performance needs to be consolidated, expanded, and transformed into a form that lends itself to being incorporated into software and systems engineering practices. Second, the HSI community needs to make the business case that communicates the value proposition of HSI. Third, systems acquisition and systems engineering policies and incentives need to be changed. Each of these recommendations provides the basis for the specific research thrusts recommended in this paper.

ACKNOWLEDGMENTS

The author gratefully acknowledges discussions with Dr. Donald MacGregor, Professor Andy Sage, Professor Barry Boehm, Carla Madni, and the research staff at Intelligent Systems Technology, Inc. during the writing of this paper. The author also gratefully acknowledges the review of the references for completeness by Dr. Shaun Jackson.


REFERENCES expert judgment, IEEE Trans Syst Man Cybernet 17 (1987),


753–770.
D.P. Baker, E. Salas, and J.A. Cannon-Bowers, Team task analysis: S. Hymon, Metrolink report urges more oversight, safety equipment,
Lost but hopefully not forgotten, Indust Org Psychol 35 (1998), Los Angeles Times, January 8, 2009, p. 85.
79–83. D. Kahneman and A. Tversky, “The simulation heuristic,” Judgment
B. Boehm, A spiral model of software development and enhancement, IEEE Comput 21 (1988), 61–72.
K.R. Boff and J.E. Lincoln, Engineering data compendium: Human perception and performance, Harry G. Armstrong Aerospace Medical Research Laboratory, Wright-Patterson AFB, OH, June 1988.
C. Bonaceto and K. Burns, A survey of the methods and uses of cognitive engineering, Expertise out of Context, Proc Sixth Int Conf Naturalistic Decision Making, 2007, pp. 29–75.
H.R. Booher, Handbook of human-systems integration, Wiley Series in Systems Engineering and Management, Andrew Sage, Series Editor, Wiley, Hoboken, NJ, 2003.
H.R. Booher, R. Beaton, and F. Greene, “Human systems integration,” Handbook of systems engineering and modeling, A. Sage and W.B. Rouse (Editors), Wiley, Hoboken, NJ, 2009, pp. 1319–1356.
R. Bordini, M. Fisher, and M. Sierhuis, Formal verification of human-robot teamwork, 4th ACM/IEEE Int Conf Human-Robot Interaction (HRI 2009), 2009, pp. 267–268.
C. Bowers, D. Baker, and E. Salas, Measuring the importance of teamwork: The reliability and validity of job/task analysis indices for team training design, Mil Psychol 6 (1994), 205–214.
J.A. Cannon-Bowers, S.I. Tannenbaum, E. Salas, and C.E. Volpe, “Defining competencies and establishing team training requirements,” Team effectiveness and decision making in organizations, R.A. Guzzo and E. Salas (Editors), Jossey-Bass, San Francisco, CA, 1995, pp. 333–381.
J.R. Chiles, Inviting disaster: Lessons from the edge of technology, Collins, New York, 2001.
S.F. Chipman, J.M. Schraagen, and V.L. Shalin, “Introduction to cognitive task analysis,” Cognitive task analysis, J.M. Schraagen, S.F. Chipman, and V.L. Shalin (Editors), Erlbaum, Mahwah, NJ, 2000, pp. 41–56.
Defense Science Board, Defense Science Board Task Force on Patriot System Performance, Report Summary DTIC No. ADA435837, Department of Defense, Washington, DC, January 2005.
Department of Defense Handbook, Human Engineering Program Process and Procedures, MIL-HDBK-46855, Department of Defense, Washington, DC, January 31, 1996.
D.L. Dieterly, “Team performance requirements,” The job analysis handbook for business, industry, and government, S. Gael (Editor), Wiley, New York, NY, 1988, pp. 766–777.
G. Friedman and A.P. Sage, Case studies of systems engineering and management in systems acquisition, Syst Eng 7 (2004), 84–97.
S.A.E. Guerlain, Factors influencing the cooperative problem-solving of people and computers, Proc Hum Factors Ergonom Soc 37th Annual Meeting, Human Factors and Ergonomics Society, Santa Monica, CA, 1993, pp. 387–391.
K.R. Hammond, Human judgment and social policy: Irreducible uncertainty, inevitable error, unavoidable justice, Oxford University Press, New York, 1996.
K.R. Hammond, R.M. Hamm, J. Grassia, and T. Pearson, Direct comparison of the efficacy of intuitive and analytic cognition in expert judgment, IEEE Trans Syst Man Cybernet 17 (1987), 753–770.
D. Kahneman and A. Tversky, “The simulation heuristic,” Judgment under uncertainty: Heuristics and biases, D. Kahneman, P. Slovic, and A. Tversky (Editors), Cambridge University Press, New York, 1982, pp. 201–208.
A. Kirlik, Modeling strategic behavior in human-automation interaction: Why an “aid” can (and should) go unused, Hum Factors 35 (1993), 221–242.
G.A. Klein, Naturalistic decision-making: Implications for design (SOAR 93-1), Crew Systems Ergonomics Information Analysis Centre, Wright-Patterson AFB, OH, 1993.
G. Klein, Implications of the naturalistic decision making framework for information dominance, Report No. AL/CF-TR-1997-0155, Armstrong Laboratory, Human Engineering Division, Wright-Patterson AFB, OH, 1997.
D. Klinger and B. Hahn, Handbook of TEAM CTA, Contract F41624-97-C-6025, Human Systems Center, Brooks AFB, Klein Associates, Salem, NH, 2003.
A.C. Landsburg, L. Avery, R. Beaton, J.R. Bost, C. Comperatore, R. Khandpur, T.B. Malone, C. Parker, S. Popkin, and T.B. Sheridan, The art of successfully applying human systems integration, Nav Eng J 120 (2008), 77–107.
A.M. Madni, The role of human factors in expert systems design and acceptance, Hum Factors 30 (1988a), 395–414.
A.M. Madni, HUMANE: A designer’s assistant for modeling and evaluating function allocation options, Proc Ergon Adv Manuf Automat Syst Conf, Louisville, KY, August 16–18, 1988b, pp. 291–302.
A.M. Madni, HUMANE: A knowledge-based simulation environment for human-machine function allocation, Proc IEEE Natl Aerospace Electron Conf, Dayton, OH, May 1988c, pp. 860–86.
A.M. Madni, Integrating human factors, software and systems engineering: Challenges and opportunities, Invited presentation at Proc 23rd Int Forum COCOMO Syst/Software Cost Model ICM Workshop 3, Davidson Conference Center, University of Southern California, Los Angeles, October 27–30, 2008.
A.M. Madni and S. Jackson, Towards a conceptual framework for resilience engineering, IEEE Syst J 3 (2009).
A.M. Madni and C.C. Madni, GATS: A Generalizable Aiding-Training System for human performance and productivity enhancement, Phase I Final Report, ISTI-FR-594-01/08, Contract # FA8650-07-M-6790, Intelligent Systems Technology, Los Angeles, CA, January 3, 2008.
A.M. Madni, A. Sage, and C.C. Madni, Infusion of cognitive engineering into systems engineering processes and practices, Proc 2005 IEEE Int Conf Syst Man Cybernet, Hawaii, October 10–12, 2005, pp. 960–965.
R.M. McIntyre and E. Salas, “Measuring and managing for team performance: Emerging principles from complex environments,” Team effectiveness and decision making in organizations, R.A. Guzzo and E. Salas (Editors), Jossey-Bass, San Francisco, CA, 1995, pp. 9–45.
D. Meister, Conceptual aspects of human factors, The Johns Hopkins University Press, Baltimore, MD, 1989.
A.M. Meyer and J.W. Baker, Eliciting and analyzing expert judgment: A practical guide, SIAM, Los Alamos National Laboratory, Los Alamos, NM, 2001.

B.B. Morgan, A.S. Glickman, E.A. Woodard, A.S. Blaiwes, and E. Salas, Measurement of team behavior in a Navy training environment, Technical Report TR-86-014, Naval Training Systems Center, Human Factors Division, Orlando, FL, 1986.
K.L. Mosier and L.J. Skitka, “Human decision makers and automated aids: Made for each other?” Automation and human performance: Theory and applications, R. Parasuraman and M. Mouloua (Editors), Erlbaum, Mahwah, NJ, 1996, pp. 201–220.
National Research Council, Committee on Human-System Design Support for Changing Technology, “Human-system integration in the system development process: A new look,” Committee on Human Factors, Division of Behavioral and Social Sciences and Education, R.W. Pew and A.S. Mavor (Editors), National Academies Press, Washington, DC, 2007, Chap. 3, pp. 55–74.
R.E. Nisbett and T.D. Wilson, Telling more than we know: Verbal reports on mental processes, Psychol Rev 84 (1977), 231–259.
R. Parasuraman and V. Riley, Humans and automation: Use, misuse, disuse, abuse, Hum Factors 39 (1997), 230–253.
R. Parasuraman, R. Molloy, and I.L. Singh, Performance consequences of automation-induced complacency, Int J Aviation Psychol 3 (1993), 1–23.
R.W. Pew and A.S. Mavor, Human-system integration in the system development process: A new look, National Academies Press, Washington, DC, 2007.
R.E. Redding, Perspectives on cognitive task-analysis: The state of the state of the art, Proc 33rd Annu Meet Hum Factors Soc, Santa Monica, CA, 1989, pp. 1348–1352.
W.B. Rouse and D.A. Bodner, “Organizational simulation,” Handbook of systems engineering and modeling, A.P. Sage and W.B. Rouse (Editors), Wiley, Hoboken, NJ, 2009, pp. 763–790.
T.B. Sheridan, Man-machine systems, MIT Press, Cambridge, MA, 1974.
T.B. Sheridan, Telerobotics, automation, and human supervisory control, MIT Press, Cambridge, MA, 1992.
T.B. Sheridan, Humans and automation: System design and research issues, Wiley, Hoboken, NJ, 2002.
P. Slovic and A. Tversky, Who accepts Savage’s axiom? Behav Sci 19 (1974), 368–373.
M.J. Stevens and M.A. Campion, The knowledge, skills, and ability requirements for teamwork: Implications for human resource management, J Management 20 (1994), 503–530.
J.A.F. Stoner, Risky and cautious shifts in group decisions: The influence of widely held values, J Exper Soc Psychol 4 (1968), 442–459.
M.A. Wallach, N. Kogan, and D.G. Bem, Diffusion of responsibility and level of risk-taking in groups, J Abnorm Soc Psychol 68 (1964), 263–274.
M.A. Wallach, N. Kogan, and D.G. Bem, Group influence on individual risk taking, J Abnorm Soc Psychol 65 (1962), 75–86.
J. Weaver, C. Bowers, E. Salas, and J. Cannon-Bowers, Networked simulations: New paradigms for team performance research, Behav Res Meth Instrum Comput 21 (1995), 12–24.
C.D. Wickens and J.G. Hollands, Engineering psychology and human performance, Pearson, Toronto, ON, Canada, 1999.
G.J.S. Wilde, Target risk 2: A new psychology of safety and health: What works? What doesn’t? And why? PDE Publications, Toronto, ON, Canada, 2001.
D.D. Woods and N.B. Sarter, “Learning from automation surprises and going sour accidents,” Cognitive engineering in the aviation domain, N. Sarter and R. Amalberti (Editors), Erlbaum, Hillsdale, NJ, 2000, pp. 327–353.
R.M. Yerkes and J.D. Dodson, The relation of strength of stimulus to rapidity of habit formation, J Compar Neurol Psychol 18 (1908), 459–482.

Azad Madni (Fellow) received his B.S., M.S., and Ph.D. degrees in engineering from UCLA with specialization in
man-machine environment systems. He is the CEO and Chief Scientist of Intelligent Systems Technology, Inc. He has
received several awards and commendations from DoD and the commercial sector for his pioneering research in
modeling and simulation in support of concurrent engineering, agile manufacturing, and human-systems integration.
He received the 2008 President’s Award and the 2006 C.V. Ramamoorthy Distinguished Scholar Award from the Society
for Design and Process Science (SDPS). In 2000 and 2004, he received the Developer of the Year Award from the
Technology Council of Southern California. In 1999, he received the SBA’s National Tibbetts Award for California for
excellence in research and technology innovation. He has been a Principal Investigator on seventy-three R&D projects
sponsored by DoD, NIST, DHS S&T, DoE, and NASA. His research interests are game-based simulations, enterprise
systems architecting and transformation, adaptive architectures, shared human-machine decision making systems, and
human-systems integration. Dr. Madni is a past president of SDPS and is the Editor-in-Chief of the society’s journal.
He is also a fellow of INCOSE, SDPS, and IETE. He is listed in Marquis’ Who’s Who in Science and Engineering, Who’s
Who in Industry and Finance, and Who’s Who in America.
