
A Review of 40 Years of Cognitive Architecture Research

Focus on Perception, Attention, Learning and Applications
Iuliia Kotseruba Oscar J. Avella Gonzalez John K. Tsotsos

Dept. of Electrical Engineering and Computer Science,
York University, 4700 Keele St., Toronto, ON, Canada M3J 1P3

In this paper we present a broad overview of the last 40 years of research on cognitive architectures. Although the number of
existing architectures is nearing several hundred, most surveys do not reflect this growth and focus on a handful of
well-established architectures. While their contributions are undeniable, they represent only a part of the research in the field. Thus,
in this survey we shift the focus towards a more inclusive and high-level overview of the research in cognitive
architectures. Our final set of 86 architectures includes 55 that are still actively developed; together they borrow from a diverse set of
disciplines, spanning areas from psychoanalysis to neuroscience. To keep the length of this paper within reasonable limits we
discuss only the core cognitive abilities, such as perception, attention mechanisms, learning and memory structure. To assess the
breadth of practical applications of cognitive architectures we gathered information on over 700 practical projects implemented
using the cognitive architectures in our list.
We use various visualization techniques to highlight overall trends in the development of the field. For instance, our data
confirms that the hybrid approach to cognitive modeling already dominates the field and will likely continue to do so in the future.
Our analysis of practical applications shows that most architectures are very narrowly focused on a particular application domain.
Furthermore, there is an apparent gap between general research in robotics and computer vision and research in these areas within
the cognitive architectures field. It is very clear that biologically inspired models do not have the same range and efficiency
as systems based on engineering principles and heuristics. Another observation is a general lack of
collaboration, which is surprising in the inherently interdisciplinary area of cognitive architectures. Several factors hinder
communication, among them the closed nature of individual projects (only one-third of the architectures reviewed here are
open-source) and terminological differences.
Keywords: survey; cognitive architectures; perception; attention; learning; practical applications

1 Introduction
The goal of this paper is to provide a broad overview of the last 40 years of research in cognitive
architectures with an emphasis on perception, attention and practical applications. Although the field of
cognitive architectures has been steadily growing, most of the surveys published in the past 10 years do not
reflect this growth and feature essentially the same set of a dozen most established architectures. The latest
large-scale study was conducted in 2010 by Samsonovich et al. [1] in an attempt to catalog implemented
cognitive architectures. Their survey contains descriptions of 54 cognitive architectures submitted by their
respective authors. The same information is also presented online in a Comparative Table of Cognitive
Architectures1. There are also other listings of cognitive architectures, but they rarely go beyond a short
description and a link to the project site or a software repository.
Since there is no exhaustive list of cognitive architectures, their exact number is unknown, but it is
estimated to be around three hundred, out of which at least one-third of the projects are currently active. To
form the initial list for our study we combined architectures mentioned in surveys (published within the last
10 years) and several large online catalogs. We also included more recent projects not yet mentioned in the
survey literature. Figure 1 shows the visualization of 195 cognitive architectures featured in 17 sources
(surveys, online catalogs and Google Scholar). It is apparent that a small group of architectures including
ACT-R, Soar, CLARION, ICARUS, EPIC, RCS and LIDA is present in many sources, while all other
projects are only briefly mentioned in online catalogs. While the theoretical and practical contributions of


Figure 1 The combined list of cognitive architectures from surveys, online catalogs and Google Scholar. Nodes on
the left side represent surveys and online catalogs of architectures. Nodes on the right side represent individual
projects. The thickness of the node reflects the number of edges connected to it, thus the number of times an
architecture appears (on the right side) or the number of architectures included (on the left). The visualization is
created using

the major architectures are undeniable, they represent only part of the research in the field. Therefore, in
this review we shift the focus from detailed descriptions of the heavyweights, which can be found
elsewhere, towards a high-level overview of the field.
To make this survey manageable we reduced the original list of architectures to 86 items. Since our
main focus is on implemented architectures with at least one practical application and several peer-reviewed
publications, we do not consider some of the philosophical architectures (e.g. CogAff, Society of Mind,
Global Workspace Theory, Pandemonium theory, etc.). We also exclude large scale brain modeling
projects, which are low-level and do not easily map onto the breadth of cognitive capabilities modeled by
other types of cognitive architectures. Further, many of the brain models do not have any practical
applications yet, and thus do not fit the parameters of the present survey. Figure 2 shows all architectures
featured in this survey with their approximate timelines recovered from the publications. Of these projects
55 are currently active.
As we mentioned earlier, the first step towards creating an inclusive and organized catalog of
implemented cognitive architectures was made in [1]. This review contained extended descriptions of 26
projects with the following information: short overview, schematic diagram of major elements, common
components and features (memory types, attention, consciousness, etc.), learning and cognitive
development, cognitive modeling and applications, scalability and limitations. A survey of this kind brings
together researchers from several disjoint communities and helps to establish a mapping between the
different approaches and terminology they use. However, the descriptive or tabular format does not allow
easy comparisons between architectures. Since our sample of architectures is large, we experimented with
alternative visualization strategies, such as alluvial and circular diagrams which are frequently used for
organizing complex tabular data. Interactive versions of these diagrams make it possible to explore the data and view
relevant references.
In the following sections we will provide an overview of the definitions of cognition and approaches to
grouping cognitive architectures. As one of our contributions, we map cognitive architectures according to
their perception modality, implemented mechanisms of attention, memory organization, types of learning,
and practical applications. Other features of cognitive architectures, such as metacognition, consciousness
and emotion, are beyond the scope of this survey.
In the process of preparing this paper, we examined the literature widely; this effort led to a 2000-item
bibliography of relevant publications. We provide this bibliography, with short summary descriptions
for each paper as supplementary material2.

2 What are Cognitive Architectures?
Cognitive architectures are a part of research in general AI, which began in the 1950s with the goal of
creating programs that could reason about problems across different domains, develop insights, adapt to
new situations and reflect on themselves. Similarly, the ultimate goal of research in cognitive architectures
is to achieve human-level artificial intelligence. According to Russell and Norvig [2] such artificial
intelligence may be realized in four different ways: systems that think like humans, systems that think
rationally, systems that act like humans, and systems that act rationally. The existing cognitive architectures
have explored all four possibilities. For instance, human-like thought is pursued by the architectures
stemming from cognitive modeling. In this case, the errors made by an intelligent system are tolerated as
long as they match errors typically made by people in similar situations. This is in contrast to rationally
thinking systems which are required to produce consistent and correct conclusions for arbitrary tasks. A
similar distinction is made for machines that act like humans or act rationally. Machines in either of these
groups are not expected to think like humans; only their actions or behavior are taken into account.

2 Links to supplementary materials, bibliography and interactive diagrams are available at

Historically, cognitive architectures research evolved with no clear definition and general theory of cognition, so each architecture was based on a different set of premises and assumptions, making comparison and evaluation difficult. Several papers were published to resolve the uncertainties, the most prominent being Sun's desiderata for cognitive architectures [3] and Newell's functional criteria (originally published in [4] and [5], and later restated by Anderson and Lebiere [6]). Newell's criteria include flexible behavior, real-time operation, rationality, a large knowledge base, learning, development, linguistic abilities, self-awareness and brain realization. Sun's desiderata are broader and include ecological, cognitive and bio-evolutionary realism, adaptation, modularity, routineness and synergistic interaction, etc. Besides defining these criteria and applying them to a range of cognitive architectures, Sun also pointed out the lack of clearly defined cognitive assumptions and methodological approaches, which hinder progress in studying intelligence. He also noted an uncertainty regarding essential dichotomies (implicit/explicit, procedural/declarative), modularity of cognition and structure of memory. However, a quick look at the existing cognitive architectures reveals persisting disagreements in terms of their research goals, structure, operation and application.

Figure 2 A timeline of 86 cognitive architectures featured in this survey. Each line corresponds to a single architecture. Colors correspond to different types of architectures: symbolic (green), emergent (red) and hybrid (blue). Architectures are sorted by the starting date, so that the oldest architectures are plotted at the bottom of the figure. Since the explicit beginning and ending dates are known only for a few projects, we recovered the timeline based on the dates of publications and activity on the project web page or online repository.
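The timeline-recovery procedure described in the Figure 2 caption can be sketched in a few lines; the architecture names and publication years below are illustrative stand-ins, not data from the survey.

```python
from collections import defaultdict

# Hypothetical (architecture, publication year) records, for illustration only.
publications = [
    ("Soar", 1983), ("Soar", 2016), ("ACT-R", 1976),
    ("ACT-R", 2015), ("EPIC", 1994), ("EPIC", 2004),
]

def recover_timelines(pubs):
    """Approximate each project's active span by its earliest and
    latest publication year, in the spirit of Figure 2."""
    years = defaultdict(list)
    for name, year in pubs:
        years[name].append(year)
    return {name: (min(ys), max(ys)) for name, ys in years.items()}

spans = recover_timelines(publications)
# Sort by starting date so the oldest architecture comes first.
ordered = sorted(spans.items(), key=lambda item: item[1][0])
```

Activity on a project web page or repository would simply contribute an additional "publication" year to the tally.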

There is generally no need to justify inclusion of the established cognitive architectures such as Soar, ACT-R, EPIC, LIDA, ICARUS and a few others. However, when it comes to less common or new projects, reasons for considering them are less clear. The criteria on which a software system can be called a cognitive architecture are also rarely addressed. For example, AKIRA, a framework that explicitly does not self-identify as a cognitive architecture [26], is featured in some surveys anyway [27]. Likewise, a knowledge base Cyc [28], which does not make any claims about general intelligence, is presented as an AGI architecture in [29].

Most of the surveys broadly define cognitive architectures as a blueprint for intelligence, or more specifically, a proposal about the mental representations and computational procedures that operate on these representations enabling a range of intelligent behaviors ([21], [22], [23], [24], [25]). Laird in [30] discussed how cognitive architectures differ from other software systems. While all of them have memory storage, control components, data representation and input/output devices, the former provide only a fixed model for general computation. Cognitive architectures, on the other hand, must change through development and efficiently use knowledge to perform new tasks. Similarly, he suggests that agent architectures built using toolkits and frameworks also cannot be considered cognitive architectures because of their lack of theoretical commitments. This is a rather restricting set of conditions, which few architectures other than Soar and ACT-R would satisfy. This view is also not common in the survey literature, where agent architectures and toolkits used to build them are also included.

Artificial General Intelligence (AGI) is currently pursued by only a handful of architectures, among which are Soar, NARS [9], LIDA [10], and several recent projects, such as SiMA [11] and OpenCogPrime [12]. Others focus on a particular aspect of cognition, e.g. attention (ARCADIA [13], STAR [14]), emotion (CELTS [15]), perception of symmetry (the Cognitive Symmetry Engine [16]) or problem solving (FORR [17], PRODIGY [18]). There are also specialized architectures designed for particular applications, such as ARDIS [19] for visual inspection of surfaces or MusiCog [20] for music comprehension and generation.

While no comprehensive list of capabilities required for intelligence exists, several broad areas have been identified that may serve as guidance for ongoing work in the cognitive architecture domain. Instead of looking for a particular definition of intelligence, it may be more practical to define it as a set of competencies and behaviors demonstrated by the system. Adams et al. [7] suggest 14 areas, among them perception, memory, attention, social interaction, planning, motivation, actuation, emotion, modeling self/other, learning, reasoning, and building/creation and arithmetic abilities. These are further split into subareas. Unsurprisingly, some of these categories may seem more important than the others and historically attracted more attention. For example, the list of cognitive functions frequently mentioned in the recent publications on cognitive architectures includes only perception, attention, decision making, learning, memory, reasoning, planning and acting [8]. However, even implementing a reduced set of abilities in a single architecture is a substantial undertaking.

Figure 3 A visualization of active symbolic, emergent and hybrid architectures for years 1973-2016. The maximum number of projects active at the same time is indicated on the plot.

For example, agent architectures 3T, PRS and ERE are reviewed in [31] and Pogamut, a framework for building intelligent agents, is included in [1]. For the rest of the architectures we defined the following selection criteria, striving both for inclusiveness and consistency: self-identification as a cognitive, robotic or agent architecture, and an existing implementation (not necessarily open-source). To further limit the scope of the survey we required at least several peer-reviewed papers and existing applications beyond simple illustrative examples. In order to include some of the newer architectures still under development, some of these conditions were relaxed.

Recently, claims have been made that deep learning is capable of "solving AI": Google (DeepMind3), Facebook AI Research (FAIR4) and other companies are actively working in this direction. Some of the most publicized achievements of deep learning include vision processing for self-driving cars (Mobileye5) and Google's neural networks capable of playing Go [32] and multiple video games [33]. Where does this work stand with respect to cognitive architectures? At present, the publications by DeepMind, for instance, are dedicated to the theoretical problems related to recurrent and deep neural networks. Neural networks are also used for building sophisticated models of visual attention and memory, for example, an attention-based model for recognizing multiple objects in the image (e.g. house number sequences) [34]. Memory is of special importance in the deep learning domain, since in order to find and exploit complex patterns in data a network should be able to perform chains of sequential computations, yet in deep networks the information from past computations can be affected by new information. Grid Long Short-Term Memory (Grid LSTM) solves this problem by providing an ability to dynamically select or ignore inputs to retain important memory contents during learning [35]. Overall, DeepMind research addresses a number of important issues in AI, such as natural language understanding, general learning, and strategies for evaluating artificial intelligence.

Unlike DeepMind, the Facebook research team explicitly discusses their work in the broad context of developing intelligent machines in [36]. Their main argument is that AI is too complex to be built all at once and instead its general characteristics should be defined first, and a concrete roadmap is proposed for developing them incrementally. Two such characteristics of intelligence are defined, namely communication and learning. The authors acknowledge that similar ideas have been tried in the past (e.g. the Blocks World simulation, frequently used by symbolic architectures for learning), but relied too much on the data provided by their creators. As a first step in this direction, an artificial ecosystem (or a "kindergarten") is proposed for teaching an intelligent agent. The plan is to start with a simpler simulated environment and gradually increase its complexity, until eventually it is possible to connect the artificial agent to the real world, thus emphasizing the developmental nature of the process. Given the emphasis on communication and learning, a primary application of such an intelligent machine would be an electronic assistant. Currently there are no publications about developing such a system, nor has it been advertised in the media, but the overall research topics pursued by FAIR align with their proposal for AI and also with the business interests of the company. Common topics include visual processing (especially segmentation and object detection), natural language processing, data mining, human-computer interaction and network security.

Although particular models already demonstrate cognitive abilities in limited domains, at this point they do not represent a unified model of intelligence. Since current deep learning techniques are mainly applied to solving practical problems and do not represent a unified framework, we do not include them in this review. However, given their prevalence in other areas of AI, deep learning methods will likely play some role in the cognitive architectures of the future.

3 https://deepmind.com
4 https://research.facebook.com
5 http://www.mobileye.com
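The memory-retention issue discussed above can be illustrated with a toy scalar cell: an ungated update lets each new input displace the result of past computations, while a gated update can blend or preserve them. This is only a sketch of the gating idea behind LSTM-style memory, not the actual Grid LSTM model of [35].

```python
def ungated_update(state, x):
    # New information simply displaces past computations.
    return x

def gated_update(state, x, keep):
    """Convex blend of old state and new input; keep=1.0 retains the
    old memory content, keep=0.0 overwrites it with the new input."""
    assert 0.0 <= keep <= 1.0
    return keep * state + (1.0 - keep) * x

state = 5.0
for x in [1.0, 2.0, 3.0]:
    state = ungated_update(state, x)
# Past content is gone: state is now just the last input.

mem = 5.0
for x in [1.0, 2.0, 3.0]:
    mem = gated_update(mem, x, keep=1.0)  # the gate chooses to ignore inputs
# Past content survives: mem still holds the original value.
```

In a real LSTM the `keep` value is not a constant but is itself computed from the current state and input, which is what allows the network to dynamically select or ignore inputs.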

3 Taxonomies of Cognitive Architectures

Many papers published within the last decade address the problem of evaluation rather than the categorization of cognitive architectures. As mentioned earlier, Newell's functional criteria ([4], [5], [6]) and Sun's desiderata [3] belong in this category. In order to evaluate the architectures, criteria such as generality, versatility and autonomy are proposed, among others. Along the same lines, Vernon et al. [37] list 12 characteristics to compare cognitivist and emergent approaches, which include embodiment, perception, action, adaptation, motivation, autonomy and others. Asselman et al. [38] evaluate architectures based on 7 criteria, ranging from perception to the underlying model and problem-solving, and Thorisson and Helgasson [27] determine the level of autonomy based on 4 criteria (real-time operation, learning, attention and meta-learning). Similarly, Langley et al. [31] define a comprehensive list of capabilities, properties and evaluation criteria for cognitive architectures. The suggested set of cognitive capabilities includes recognition, decision making, perception, prediction, planning, acting, communication, goal setting and learning. While many of these criteria could be used to classify the architectures, they are usually too fine-grained to be applied to a generic architecture.

Thus it is more common to group cognitive architectures based on the type of information processing they represent. Three categories are defined: symbolic (also referred to as cognitivist), emergent (connectionist) and hybrid. This taxonomy based on information processing was extended by Duch et al. [21] to include typical memory and learning properties for each group (Figure 4). It has been used in other surveys as well ([38], [39]).

Symbolic systems are often implemented as a set of if-then rules (also called production rules) that can be applied to a set of symbols representing the facts known about the world. Because it is a natural and intuitive representation of knowledge, symbolic manipulation remains very common. In general, each paradigm has its strengths and weaknesses. Symbolic systems excel at planning and reasoning, but they lack the flexibility and robustness that are required for dealing with a changing environment and for perceptual processing. Furthermore, any symbolic architecture requires a lot of work to create an initial knowledge base, but once it is done the architecture is fully functional. The emergent approach resolves these issues by building massively parallel models, analogous to neural networks, where information flow is represented by a propagation of signals from the input nodes. Emergent architectures are easier to design, but they must be trained in order to produce useful behavior, and their existing knowledge may deteriorate with the subsequent learning of new behaviors. Moreover, logical inference in a traditional sense becomes problematic, since knowledge is no longer a set of symbolic entities but is instead distributed throughout the network, so the resulting system also loses its transparency. Interestingly, emergent architectures have gained more weight only relatively recently, although research in this direction has been active for at least as long as traditional AI (Figure 2).

As neither paradigm is capable of addressing all major AI issues, hybrid architectures attempt to combine elements of both symbolic and emergent approaches. Although by design there are no restrictions on how this hybridization is done, some approaches may be more intuitive and easier to implement. For example, 4CAPS combines a traditional symbolic production system interpreter with connectionist computational mechanisms such as thresholds, weights, activations and parallel processing [41]. CLARION uses different representations depending on the type of knowledge: explicit factual knowledge is symbolic, while procedural implicit knowledge is subsymbolic [40]. This type of hybridization is termed symbolic-connectionist by Duch et al. [21]. They also define an alternative, a localist-distributed approach. A good example of the latter is Leabra, which uses a localist representation for labels and a distributed representation for features when learning invariant object detection [42]. In [43] the two types of hybridization are called a horizontal and vertical integration respectively, and a more fine-grained classification of hybrid connectionist-symbolic models is presented in [44].
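As a minimal illustration of the production-rule style of symbolic processing described above, a forward-chaining interpreter fits in a few lines; the rules and facts here are invented for the example and do not come from any particular architecture.

```python
# Minimal forward-chaining production system (illustrative only).
# Each rule is (conditions, conclusion): if all condition symbols are
# present in working memory, the conclusion symbol is added.
rules = [
    ({"obstacle-ahead"}, "must-turn"),
    ({"must-turn", "clear-left"}, "turn-left"),
]

def run(initial_facts, rules):
    """Fire rules until no rule adds a new fact (a recognize-act cycle)."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = run({"obstacle-ahead", "clear-left"}, rules)
# Both rules chain: "must-turn" and then "turn-left" are derived.
```

Real production systems such as those in Soar or ACT-R add conflict resolution, variables in rule conditions, and learning mechanisms on top of this basic recognize-act cycle.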

There are not many alternative taxonomies in the literature. A scheme by Sun [47] (Figure 5) emphasizes modularity and communication between the modules, and the one by Gray [46] focuses on the purpose and use of architectures (Figure 6). Other classification schemes are very specific. For instance, Langley et al. [22] do not include connectionist models in their survey since they do not demonstrate the same functionality as symbolic and hybrid models. However, following this taxonomy requires implementation details for each architecture, which are often not available. Therefore, in the present survey we follow the traditional distinction between symbolic, emergent and hybrid architectures. Given the advantages of the hybrid approach, it is not surprising that such architectures already are the most numerous, with the tendency to grow even more (Figure 3). Thus, our data confirms a prediction made by Duch et al. [21] almost a decade ago. Since the backgrounds of the architectures presented here span research areas from philosophy to neurobiology, we do not attempt to create a single organizational scheme to fit them all. Instead, for each broad cognitive function, namely perception, attention, memory and learning, we extract the relevant data from the publications and group it by frequency.

Figure 4 Taxonomy by Duch et al. from [21]
Figure 5 Taxonomy by Sun from [45]
Figure 6 Taxonomy by Gray from [46]

4 Perception

Regardless of its design and purpose, an intelligent system cannot exist in isolation and requires an input to produce behavior. Perception is a process that transforms raw input into the system's internal representation for carrying out cognitive tasks. The amount and type of incoming perceptual data depend on the type and number of sensors available and on the cognitive capability of the intelligent system, i.e. what it can derive from this data, be it visual or any other information. Figure 7 shows physical and virtual sensors used by cognitive architectures and their corresponding human senses: vision, hearing, touch, smell and proprioception. Other sensory modalities that do not have a correlate among human senses are symbolic input (using a keyboard or graphical user interface (GUI)) and various sensors such as LADAR, IR, laser, etc. The taste modality is only featured in BBD robots and is simulated by the conductivity of the object [48].
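The grouping-by-frequency step mentioned above reduces to a simple tally over per-architecture records; the entries below are hypothetical placeholders for the data extracted from the publications.

```python
from collections import Counter

# Hypothetical (architecture -> implemented modalities) records.
modalities = {
    "ArchA": ["vision", "symbolic input"],
    "ArchB": ["vision", "audition"],
    "ArchC": ["vision"],
}

# Count how many architectures implement each modality.
counts = Counter(m for mods in modalities.values() for m in mods)
ranked = counts.most_common()  # most frequent modality first
```

The same tally, applied to the full set of 86 architectures, determines the size of each sector in diagrams such as Figure 7.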

Figure 7 Diagram showing sensory modalities of cognitive architectures. Architectures are divided into three groups: symbolic (green), emergent (red) and hybrid (blue), and are located at the bottom of the diagram. Other sectors of the circle correspond to different sensory modalities, which include vision, hearing, touch, smell and proprioception. Subcategories of the sensory modalities are shown as bands within the sectors (e.g. in vision, subcategories include Kinect, monocular cameras, simulations, etc.). The "symbolic input" category means that input to the cognitive architecture is provided as text or through a GUI. This does not include text input simulating audio [49]. The "sensor" category includes various sensors that do not correspond to any human senses. Ribbons are drawn between the sensory modalities and the architectures that implement them. If an architecture implements several sensory modalities, the ribbons are drawn as more opaque. A list of perceptual modalities for each cognitive architecture with relevant references is included in supplementary materials and in the interactive version of the diagram. The visualization is created with www.circos.ca.

Vision

Vision is the most represented sensory modality, as visual input accounts for more than half of all possible input modalities. This is evident from the diagram in Figure 7, where the corresponding circle segment is the largest. Even in robotics, various non-visual sensors and proprioception (e.g. odometry and bumpers) are used for solving visual tasks such as navigation, obstacle avoidance and search.

Real environments for embodied cognitive architectures vary both in size and visual complexity. For example, a salesman robot Gualzru (CORTEX [51]) moves around a large room full of people and iCub (MACsi [52]) recognizes and picks up various toys from the table. On the other hand, simple environments with no clutter or obstacles are also used in cognitive architectures research (BECCA [53], MDB [54]). In addition, color-coding objects is a common way of simplifying visual processing. For instance, ADAPT tracks a red ball rolling on the table [55] and DAC orients itself towards targets marked with different colors [56].

Cognitive architectures that operate in realistic unstructured environments usually approach vision as an engineering problem and construct chains of standard computer vision techniques for a particular situation. For example, obstacle avoidance in navigation tasks is solved in the same manner as in robot control architectures: 4D-RCS reconstructs the scene from stereo images and applies machine learning to LADAR data and color histograms to find traversable ground [57]. Similarly, the ATLANTIS robot architecture constructs a height map from stereo images and uses it to identify obstacles [58]. Among its many applications, a planetary rover controlled by ATLANTIS performs cross-country navigation in a rough rocky terrain [50]. The CARACaS architecture also computes a map for hazard avoidance from stereo images and range sensors. In addition, it uses optical flow and model-based segmentation to identify the moving objects in the scene [59].

A typical task in social robotics is detecting people and their gestures, and common vision techniques are frequently used. DIARC uses a neural network for face detection, laser readings for leg detection [60] and least-squares regression over SIFT features for object recognition [61]. The cognitive architecture for the child-like robot iCub utilizes SURF features, AdaBoost learning and a mixture of Gaussians for hand detection and tracking [62]. RoboCog and CORTEX use a camera and proprietary software from Kinect to find people in a robotic salesman scenario [63]. Here too a combination of input from camera and sensors, local binary patterns and a SVM classifier are trained to recognize particular persons and infer their gender and age [51]. Kismet, a social robot, uses simple motion detection, a skin tone feature map and neural networks to find the faces and eyes of its caregivers [64].

Fewer systems incorporate biologically plausible vision. One of the most elaborate examples is the Leabra vision system called LVis [65], which is based on the anatomy of the ventral pathway of the brain: the primary visual cortex (V1), extrastriate areas (V2, V4) and the inferotemporal (IT) cortex. As in the human visual system, the receptive fields of neurons become less location specific and more featurally complex in the higher levels of the hierarchy. All layers are reciprocally connected, allowing the higher-level information to influence the bottom-up processing during both the initial learning and subsequent recognition of objects, and contain local, recurrent inhibitory dynamics that limit activity levels across layers. Leabra is able to distinguish dozens of object categories from synthetic images with noise and occlusion [66].

The visual system of Darwin VIII (BBD) is also modeled on the primate ventral visual pathway. Similarly to LVis, neurons in successive areas have progressively larger receptive fields. With this system on board, Darwin VIII is able to segment a scene and categorize visual objects (simple shapes and colors) [67].

The SASE architecture implements a Staggered Hierarchical Mapping (SHM) for low-level visual feature processing. Although it does not replicate the structure of the human visual pathways, SHM is a hierarchical neural network with localized connections, where each neuron gets input from a restricted region in the previous layer. The sizes of receptive fields within one layer are the same and increase in higher levels [68]. SHM was tested on a SAIL robot in an indoor navigation scenario [69].
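The color-coding shortcut mentioned earlier (e.g. tracking a red ball) reduces vision to thresholding plus a centroid computation. Below is a dependency-free sketch on a tiny synthetic RGB grid, not code from any of the cited systems; the threshold values are arbitrary.

```python
def find_red_centroid(image, r_min=200, gb_max=80):
    """Return the centroid (row, col) of strongly red pixels,
    or None if no pixel passes the threshold."""
    hits = [
        (r, c)
        for r, row in enumerate(image)
        for c, (red, green, blue) in enumerate(row)
        if red >= r_min and green <= gb_max and blue <= gb_max
    ]
    if not hits:
        return None
    rows = sum(r for r, _ in hits) / len(hits)
    cols = sum(c for _, c in hits) / len(hits)
    return rows, cols

# 3x3 synthetic image: one red pixel at (1, 2), the rest gray.
gray, red = (120, 120, 120), (255, 0, 0)
image = [
    [gray, gray, gray],
    [gray, gray, red],
    [gray, gray, gray],
]
centroid = find_red_centroid(image)
```

In practice, systems like those above would do this with OpenCV on camera frames, typically in HSV color space for robustness to lighting changes.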

Given that deep learning methods are becoming increasingly popular in computer vision, it is surprising to see that not many cognitive architectures employ them. One example is the deep learning architecture DeSTIN, which processes visual input in the OpenCogPrime architecture [72]. Other biologically inspired neural networks are ART [70] and HTM [71], which are used for various visual recognition and classification tasks.
Overall, the architectures that work with realistic visual input, typically in robotics, tend to include specialized pipelines for each scenario, which is becoming easier with widely available software toolkits such as OpenCV or the Kinect API. On the other hand, the systems that try to model more general-purpose and biologically plausible visual systems, such as Leabra or BBD, are not as efficient and their applications are limited to controlled environments.
Simulations are also commonly used as an alternative to visual processing. Some simulations provide pixel-level data, for example, the one used for the experiments with Leabra [42]. This simulation is represented by 100 categories of objects rendered in 3D with various illumination levels and partial occlusions, which helps to achieve a high degree of complexity and realism. Otherwise, the visual realism of the simulation usually serves purely aesthetic purposes (CoJACK [73], Novamente [74], Pogamut [75]). Despite varying visual complexity and realism, simulations usually provide the same data: objects, their properties (color, shape, label, etc.), locations and properties of the agent itself, and sometimes environmental factors (e.g. weather).
Audition
Audition is less commonly implemented in cognitive architectures. Since the auditory modality is purely functional, most architectures resort to using available speech-to-text software rather than develop models of audition. Among the few architectures modeling auditory perception are ART, ACT-R and EPIC. For example, ARTWORD and ARTSTREAM were used to study integration [76] and a model of music interpretation was developed with ACT-R [77].
More commonly, dedicated software is used for speech processing. Sound or voice commands are typically used to guide an intelligent system or to communicate with it. For example, the Playmate system based on CoSy uses the Nuance Recognizer v8.5 software for speech processing [78] and can have a meaningful conversation in a subset of English. As an illustration, consider a typical dialog from Playmate [79]:
Human picks up the red square and puts down a red triangle to the right of the blue square.
Robot: "What is the thing to the right of the blue square?"
Human: "It is a red triangle."
Robot: "Ok."
A salesman robot controlled by the CORTEX architecture can understand and answer questions about itself using the Microsoft Kinect Speech SDK [51]. The FORR architecture uses speech recognition for the task of ordering books from the public library by phone. The architecture in this case increases the robustness of the automated speech recognition (ASR) system based on an Olympus/RavenClaw pipeline (CMU) [80]. It is easy to notice that most human-robot interactions have to follow a certain script in order to be successful, as is demonstrated by the following example. In this sample interaction, FX2 is the automated system, the user is a person calling, and ASR shows the results of the speech processing:
FX2: What title would you like?
User: Family and Friends
ASR: FAMILY .NEXT .FRIENDS
FX2: Let's try something else. I have two guesses. The first is Family and Friends. The second is Family Happiness. Is it either of these?
User: The first one
ASR: .FIRST
FX2: Is the full title Family and Friends?
User: Yes
ASR: YES

Other examples of systems using off-the-shelf speech processing software include Polyscheme using ViaVoice [81] and ISAC with a Microsoft Speech engine [82]. Some recent architectures, such as DIARC, aim at supporting more naturally sounding requests like "Can you bring me something to cut a tomato?", however, they are still in the early stages of development [83].
In our sample of architectures, we find that most effort is directed at the linguistic and semantic information carried by speech and less attention is paid to the emotional content. Some attempts in this direction are made in social robotics. For example, the robot Kismet does not understand what is being said, but can determine approval, prohibition or soothing based on the prosodic contours of the speech (loudness, speech rate and intonation) [84]. A virtual agent Gandalf controlled by the Ymir architecture also has a prosody analyzer together with a grammar-based speech recognizer that can understand a limited vocabulary of 100 words [85]. Even the sound itself can be used as a cue; for example, the BBD robots can orient themselves toward the source of a loud sound [67].
Symbolic input
The symbolic input category in Figure 7 combines several input methods which do not fall under physical sensors or simulations. These include input in the form of text commands and data, and through a graphical user interface (GUI). Text commands are usually written in terms of primitive predicates used in the architecture, so no additional parsing is required. Text input is typical for the architectures performing planning and logical inference tasks (e.g. NARS [9], CSE [16], OSCAR [86], MAX [87], Homer [88], R-CAST [92]). The data input can be in text or any other format and is primarily used in the categorization and classification applications (e.g. HTM [93], ART [94]). Although many architectures have tools for visualization of results and the intermediate stages of computation, interactive GUIs are less common. They are mainly used in human performance research to simplify input of the expert knowledge and to allow multiple runs of the software with different parameters (IMPRINT [89], MAMID [90], OMAR [91]).
5 Attention
Following Chun et al. [95] we consider two major categories of the attention mechanisms - external and internal. External or perceptual attention selects and modulates information incoming from various senses, and internal attention modulates internally generated information, such as the contents of working memory or a set of possible behaviors in a given context6. Figure 8 shows common types of external and internal attention mechanisms. Here external (perceptual) attention is subdivided into region of interest (ROI) selection, gaze control, top-down (task-driven) and bottom-up (data-driven) attention; external attention thus includes both top-down and bottom-up attention and any other mechanisms that involve perception. Internal attention involves only the internal action selection mechanisms (such as emotions and drives), which may also be indirectly affected by external attention.
Task selection is represented by four broad categories: predefined scenarios, planning, reactive actions and dynamic action selection. Predefined scenarios are usually represented by a predefined set of actions to perform (e.g. replicate an experiment) or in the form of a finite state machine. Planning refers to a traditional planning approach common in many symbolic architectures; here actions are selected automatically based on the computed plan. Reactive actions are typical for robotic systems, where they are connected to particular sensors and have priority over any other action. This feature is useful for safety in robotics (e.g. stop the robot if its bumper sensors touch an obstacle). Finally, dynamic action selection means that the next action depends on multiple dynamic factors, such as the available sensory information, urgency, previous experience, etc.
6 Note that external and internal attention are not the same as exogenous (bottom-up, transient) and endogenous (top-down) attention.
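A predefined scenario given "in the form of a finite state machine" can be as small as a transition table. The sketch below is our own illustration (the states and events are invented, loosely echoing the scripted book-ordering dialog), not the control logic of any cited system.

```python
# Minimal finite-state script for a predefined interaction scenario.
# State names and events are illustrative only.
TRANSITIONS = {
    ("ask_title", "title_given"): "confirm_title",
    ("confirm_title", "yes"):     "done",
    ("confirm_title", "no"):      "ask_title",
}

def run_script(events, state="ask_title"):
    """Advance through the fixed script; unexpected input leaves the state unchanged."""
    for ev in events:
        state = TRANSITIONS.get((state, ev), state)
    return state

assert run_script(["title_given", "no", "title_given", "yes"]) == "done"
```

The rigidity is the point: any event outside the table is simply ignored, which is exactly why interactions driven by such scripts only succeed when the user stays on script.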

For example, the FORR architecture integrates reactivity, situation-based behavior and heuristic reasoning in a three-tiered hierarchical model [96]. A similar hierarchical decision-making process is implemented in RCS [97]. Note that all these mechanisms are not exclusive and many architectures implement several of them at once.
Figure 8 Diagram showing types of attention implemented in cognitive architectures. Graphics created with raw.densitydesign.org
External attention
A particular implementation of perceptual attention would differ depending on the sensory modality, so auditory data would be treated differently from the visual input. Several architectures can modulate auditory data (OMAR [98], iCub [99] and MIDAS [100]), but otherwise visual attention is much more common. The selection of visual data to attend can be data-driven (bottom-up) or task-driven (top-down). The bottom-up attentional mechanisms identify salient regions whose visual features are distinct from the surrounding image, usually along a combination of dimensions, such as color channels, edges, motion, etc. Some architectures resort to the classical visual saliency algorithms, such as Guided Search [101] used by ACT-R [102] and Kismet [103], or the Itti-Koch-Niebur model [104] used by ARCADIA [13], iCub [99] and DAC [105].
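The bottom-up idea — a location is salient when its features differ from their surroundings — can be shown with a crude single-channel center-surround sketch. This is a deliberate simplification of our own; real models such as Itti-Koch-Niebur operate on multi-scale pyramids over several feature channels (color, intensity, orientation) and combine them into one saliency map.

```python
def saliency(channel):
    """Toy center-surround: each cell's saliency is |value - mean of the
    rest of the image|. A real model would use local neighborhoods and
    several feature channels (color, edges, motion) combined."""
    flat = [v for row in channel for v in row]
    out = []
    for row in channel:
        out.append([abs(v - (sum(flat) - v) / (len(flat) - 1)) for v in row])
    return out

# A lone bright cell pops out against a uniform background:
chan = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
sal = saliency(chan)
peak = max((v, (r, c)) for r, row in enumerate(sal) for c, v in enumerate(row))
assert peak[1] == (1, 1)
```

The most salient location then becomes the next candidate for the region-of-interest selection or gaze control mentioned above.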

Bottom-up attention can also be implemented by finding unusual motion patterns (MACsi [106]) or discrepancies between observed and expected data (4D-RCS [107]).
Top-down attention can be applied to further limit the sensory data provided by the bottom-up processing. For example, in visual search, knowing desired features of the object (e.g. the color red) can narrow down the options provided by the data-driven figure-ground segmentation. Many architectures resort to this mechanism to improve search efficiency (ACT-R [108], APEX [109], ARCADIA [110], CERA-CRANIUM [111], CHARISMA [112]). Another option is to use hard-coded or learned heuristics; for instance, CHREST looks at typical positions on a chess board [113] and MIDAS replicates common eye scan patterns of pilots [100]. The limitation of the current top-down approaches is that they can direct vision for only a limited set of predefined visual tasks, however ongoing research in STAR attempts to address this problem [114].
Internal attention
Unlike external attention, which filters perceptual information, internal attention selects an appropriate action from a set of possibilities [115]. In addition to the perceptual evidence and past experience, other internal factors such as emotions or drives may also affect where the focus of attention will be moved next (ARS/SiMA [11]).
The simplest mechanism of selection is reactive. Although few systems besides Subsumption [116] are purely reactive, hardcoded actions for particular scenarios, such as collision avoidance (e.g. bumpers on a robot signaling a collision), are common in robotic architectures (e.g. PRS [117], ERE [118]).
The cognitive architectures that run in a single thread or sequentially usually do not require a specific attention mechanism to select the next action or task. It happens automatically by arranging goals in a stack or queue. Since only one goal can be attended at a time, the one on top of the stack is always satisfied first. If the current goal cannot be reached, it is decomposed into subgoals, which are pushed onto the stack and attended sequentially. When all subgoals are successfully resolved, they are popped off the stack and the parent goal becomes the new focus of attention. The goals may be suspended or canceled if a more urgent goal is pushed onto the stack. This approach is used in ICARUS [120], ACT-R [121], AIS [122], IMPRINT [123] and other primarily symbolic planners.
A more flexible solution is to dynamically rank available actions based on their relevance to the current goal, urgency or previous success (e.g. ASMO [119], CLARION [124], Pogamut [125], NARS [126], Ikon Flux [127], R-CAST [128], COGNET [129]). Since at every cognitive cycle the new sensory information may change the ranking of existing options or add new ones to the mix, these systems should theoretically be more adaptive, enabling learning and adaptation to the changing environment.
Dynamic attention control is also efficient in architectures that simulate cognition as multiple concurrent processes. Such architectures are based on the Global Workspace Theory of Baars [130] (ARCADIA [13], CERA-CRANIUM [131], CELTS [132], LIDA [133], MLECOG [134]) or the Society of Mind by Minsky [135] (Cerebus [136]), where only a select few processes reach consciousness (associated with attention). Attention could be a separate process evaluating each running process and selecting relevant ones, as in CELTS [137], or values could be updated as a result of interaction with other processes; the processes above threshold are then brought to attention automatically (CERA-CRANIUM [138], LIDA [139]). Other approaches include filtering (DSO).
6 Memory
Memory is an essential part of any systems-level cognitive model, regardless of whether the model is being used for studying the human mind or for solving engineering problems. All architectures featured in this review have memory systems that store intermediate results of computations; despite their functional similarity, the particular implementations of memory systems differ significantly and depend on the research goals and conceptual limitations, such as biological plausibility, as well as engineering factors (e.g. programming language, software architecture, use of frameworks). Broadly speaking, there are two approaches to modeling memory: 1) introducing distinct memory stores based on duration (short- and long-term) and type (procedural, semantic, etc.), and 2) a single memory structure representing several types of knowledge.
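The two modeling approaches can be contrasted with a toy sketch. This is our own illustration — the store names and the triple format are invented, not taken from any of the surveyed architectures.

```python
# 1) Distinct memory stores, keyed by duration and knowledge type:
multi_store = {
    "short_term": {"working": []},
    "long_term": {"procedural": {}, "semantic": {}, "episodic": []},
}
multi_store["long_term"]["semantic"]["sky"] = "blue"   # a factual entry

# 2) A single unified structure holding every kind of knowledge,
#    here a flat set of tagged triples:
unified = set()
unified.add(("sky", "color", "blue"))        # semantic fact
unified.add(("grasp", "precedes", "lift"))   # procedural/episodic trace

assert multi_store["long_term"]["semantic"]["sky"] == "blue"
assert ("sky", "color", "blue") in unified
```

In the first design, routing an item to the right store is the architecture's job; in the second, one retrieval mechanism serves all knowledge types, as in the unified-representation systems discussed later.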

The first approach is influenced by the multi-store memory model of Atkinson-Shiffrin (1968) [140], later modified by Baddeley (1976) [141]. Although these theories are dominant in psychology, their validity for engineering is questioned by some because they do not provide a functional description of various memory mechanisms [142]. Nevertheless, most architectures distinguish between various memory types, although the naming conventions differ depending on the conceptual background. For instance, the architectures designed for planning and problem solving have short- and long-term memory storage systems, but do not use terminology from cognitive psychology. The short-term storage in planners is usually represented by a current world model and the contents of the goal stack. The long-term knowledge in planners is usually referred to as a knowledge base for facts and a set of problem-solving rules, which correspond to semantic and procedural long-term memory (e.g. Disciple [143], PRS [144], ARDIS [145], ATLANTIS [146], MACsi [106]). Some architectures also save previously implemented tasks and solved problems, imitating episodic memory (REM [147], PRODIGY [148]).
Figure 9 Diagram showing various types of short-term (STM) and long-term (LTM) memory implemented in cognitive architectures. Graphics created with raw.densitydesign.org

Here we primarily distinguish between the long-term and short-term storage. Short-term storage is split into short-term memory (STM) and working memory (WM) following [149]. Long-term storage is further subdivided into episodic, semantic and procedural types, which store episodes from the personal experience of the system, factual knowledge, and information on what actions should be taken under certain conditions, respectively. Figure 9 shows a visualization of various types of memory implemented by the architectures.
Short-term memory
Only a few architectures implement short-term or sensory memory. Typically, STM is a very short-term buffer that stores several recent percepts. Iconic visual memory is part of ACT-R [102], EPIC [150], LIDA [151] and Pogamut [125]. MusiCog implements echoic memory, a sensory memory for auditory signals [152], which is a buffer for temporarily holding the incoming sensory data before it is transferred to other memory structures. In all these architectures the sensory buffer preprocesses and stores recently seen (or heard) information for tens to hundreds of milliseconds.
Working memory
Working memory is defined as a mechanism for the temporary storage of information related to the current task. In principle, any computation inherently requires a temporary storage for partial and intermediate data, therefore every cognitive architecture on our list implements working memory to some extent. Typically, working memory is a temporary storage for percepts that also contains other items related to the current task and is frequently associated with the current focus of attention. Based on the publications we reviewed, similar definitions are given for DSO [157], CoSy [158], ARCADIA [110] and DIARC [61]. For instance, in MAMID working memory contains a currently activated set of instructions [155]; semantic working memory can be represented by an assertional database with timestamped propositions [153]; BECCA keeps a weighted combination of several recently attended features and any recent actions [154]; and Companions utilize working memory as a cache for intermediate results [156].
Since, by definition, working memory is a relatively small temporary storage, for biological realism its capacity should be limited. However, there is no agreed upon way of how this should be done. A simpler solution is to set a hard limit on the number of items in memory, for example 3-6 objects in ARCADIA [110], 4 chunks in CHREST [162] or up to 20 items in MDB, and delete the oldest or the most irrelevant item to avoid overflow. In the Recommendation Architecture the limit of 3-4 items in working memory emerges naturally from the structure of the memory system [165]. A more common approach is to gradually remove items from the memory based on their recency or relevance in the changing context. The CELTS architecture implements this principle by assigning an activation level to percepts that is proportional to the emotional valence of a perceived situation. This activation level changes over time, and as soon as it falls below a set threshold, the percept is discarded [160]. The Novamente Cognitive Engine has a similar mechanism, where atoms stay in memory as long as they build links to other memory elements and increase their utility [161]. Items can also be discarded if they have not been used for some time; the exact amount of time varies from 4-9 seconds (EPIC [150]) and 5 seconds (MIDAS [163], CERA-CRANIUM [131]) to tens of seconds (LIDA [164]). In GLAIR the contents of working memory are discarded when the agent switches to a new problem [159]. Finally, many architectures implement working memory as a dynamic storage with no explicitly defined capacity. It is still unclear if under these conditions the size of working memory can grow substantially without any additional restrictions.
Long-term memory
Long-term memory (LTM) preserves a large amount of information for a very long time. Typically, it is divided into procedural memory of implicit knowledge (e.g. motor skills and routine behaviors) and declarative memory.
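Returning to working memory, the two capacity-limiting strategies described above — a hard item limit with eviction of the weakest item, and CELTS-style activation decay with a threshold — can be sketched together. All parameters below are illustrative, not values used by any architecture.

```python
class WorkingMemory:
    """Sketch: a bounded store whose items also decay each cognitive cycle."""
    def __init__(self, capacity=4, decay=0.1, threshold=0.2):
        self.capacity, self.decay, self.threshold = capacity, decay, threshold
        self.items = {}  # percept -> activation level

    def add(self, percept, activation=1.0):
        if len(self.items) >= self.capacity and percept not in self.items:
            # Hard limit: evict the least activated (most irrelevant) item.
            del self.items[min(self.items, key=self.items.get)]
        self.items[percept] = activation

    def cycle(self):
        # Activation decays over time; below-threshold percepts are discarded.
        self.items = {p: a - self.decay for p, a in self.items.items()
                      if a - self.decay >= self.threshold}

wm = WorkingMemory(capacity=2)
wm.add("cup", activation=0.3)
wm.add("door", activation=1.0)
wm.add("face", activation=0.9)   # evicts "cup", the weakest item
assert set(wm.items) == {"door", "face"}
wm.cycle()                       # door: 0.9, face: 0.8 - both survive
assert set(wm.items) == {"door", "face"}
```

Repeated calls to `cycle()` without re-activation eventually empty the store, mimicking the gradual forgetting of unattended percepts.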

Declarative memory contains the (explicit) knowledge and is further subdivided into semantic (factual) and episodic (autobiographical) memory. Long-term memory is a storage for innate knowledge that enables operation of the system, therefore almost all architectures implement procedural and/or semantic memory.
Procedural memory contains knowledge about how to get things done in the task domain. In symbolic production systems, procedural knowledge is represented by a set of if-then rules preprogrammed or learned for a particular domain (3T [167], ACT-R [168], EPIC [169], SAL [170], Soar [171], APEX [172]). Other variations include sensory-motor schemas (ADAPT [173]), task schemas (ATLANTIS [146]) and behavioral scripts (FORR [174]). In emergent systems, procedural memory may contain sequences of state-action pairs (BECCA [53]) or ANNs representing perceptual-motor associations (MDB [175]).
Semantic memory stores facts about the objects and relationships between them. In the architectures that support symbolic reasoning, semantic knowledge is typically implemented as a network-like ontology, where nodes correspond to concepts and links represent relationships between them (Casimir [176], Cerebus [177], Disciple [178], MIDAS [179], CHREST [180]). In emergent architectures factual knowledge is represented as patterns of activity within the network (BBD [181], SHRUTI [182], HTM [183], ART [184]).
Episodic memory stores specific instances of past experience, which can later be reused if a similar situation arises (MAX [185], OMAR [186], iCub [187], Ymir [85]). These experiences can also be exploited for learning new semantic or procedural knowledge. For example, CLARION saves action-oriented experiences as "input, output, result" and uses them to bias future behavior [188]. Similarly, MAMID [190] saves the past experience together with the specific affective connotations (positive or negative), which affect the likelihood of selecting similar actions in the future. BECCA stores sequences of state-action pairs to make predictions and guide the selection of system actions [53], and MLECOG gradually builds a 3D scene representation from perceived situations [189]. Other examples include R-CAST [191], Soar [192], Novamente [193] and Theo [194].
The dichotomies between the explicit/implicit and the declarative/procedural long-term memories are usually merged. One of the few exceptions is CLARION, where procedural and declarative memories are separate and both are subdivided into an implicit and an explicit component. This distinction is preserved on the level of knowledge representation: implicit knowledge is captured by distributed subsymbolic structures like neural networks, while explicit knowledge has a transparent symbolic representation [166].
Despite the evidence for the different memory systems, some architectures do not have separate representations for different kinds of knowledge or for short- vs long-term memory, and instead use a unified structure to store all information in the system. For example, CORTEX and RoboCog use an integrated, dynamic multi-graph object which can represent both sensory data and high-level symbols describing the state of the robot and the environment ([51], [195]). NARS represents all empirical knowledge, regardless of whether it is declarative, episodic or procedural, as formal sentences in Narsese [197]. Similarly, DiPRA uses Fuzzy Cognitive Maps to represent goals and plans [196].
7 Learning
Learning is the capability of a system to improve its performance over time. Ultimately, any kind of learning is based on experience: a system may be able to infer facts and behaviors from observed events or from results of its own actions. The type of learning and its realization depend on many factors, such as design paradigm (e.g. biological, psychological), application scenario, and the data structures and algorithms used for implementing the architecture. However, not all of these pieces of information can be easily found in the publications, and we will not attempt to analyze all these aspects given the diversity and number of the cognitive architectures surveyed. Thus, a more general summary is preferable, where four types of learning are defined: perceptual, declarative, procedural and attentional. Perceptual learning involves improving skills for processing perceptual data.

Such skills include recognition, discrimination, categorization, building spatial maps or finding any sort of patterns that could be further exploited. Declarative learning expands semantic knowledge about the world, specifically, the explicit knowledge of facts and relations between them. This knowledge may be learned from experience in the process of solving a problem (e.g. in production systems chunks are automatically added to memory as each goal is completed). Procedural learning enables modifying existing behaviors or generating new ones by committing to memory successful actions to be reused later (specialized behaviors) or by extending known procedures to new circumstances (generalized behaviors). More involved cases combine several types of learning at once. A typical example is sensorimotor learning, where a system finds associations between percepts and behaviors. Knowledge is usually obtained in two stages: initially random actions are performed to accrue experience in the form of <action, percept> associations, followed by fitting a model to the accumulated data. This type of learning has been used for navigation and for learning spatial affordances from sensory data (e.g. FORR [198], iCub [199]). Finally, attentional learning allows improvement in the way the system modifies the action selection rules through accumulating statistics of previous successes and failures of completed actions, reward mechanisms (various kinds of reinforcement learning) or via emotions and drives.
Figure 10 Diagram showing learning mechanisms implemented in cognitive architectures. Graphics created with raw.densitydesign.org
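The statistics-of-use idea behind attentional learning — bias selection toward actions that succeeded before — reduces to simple bookkeeping. The sketch below is our own; the Laplace-smoothed scoring is one arbitrary choice, not the rule used by any surveyed system.

```python
from collections import defaultdict

class ActionStats:
    """Sketch of attentional learning via statistics of use: actions that
    succeeded more often in the past are ranked higher."""
    def __init__(self):
        self.successes = defaultdict(int)
        self.attempts = defaultdict(int)

    def record(self, action, succeeded):
        self.attempts[action] += 1
        self.successes[action] += int(succeeded)

    def rank(self, candidates):
        # Laplace-smoothed success rate, so unseen actions keep a chance.
        def score(a):
            return (self.successes[a] + 1) / (self.attempts[a] + 2)
        return sorted(candidates, key=score, reverse=True)

stats = ActionStats()
for ok in (True, True, False):
    stats.record("grasp", ok)
stats.record("push", False)
assert stats.rank(["push", "grasp", "wave"])[0] == "grasp"
```

Reinforcement learning replaces the raw success counts with discounted reward estimates, but the selection bias it produces is of the same kind.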

Figure 10 shows a visualization of the learning types for all cognitive architectures. Here the 4 types of learning are further subdivided into categories and linked to the architectures implementing them. There are also 22 architectures that do not implement any learning. In some areas of research learning is not necessary, for example, in human performance modeling, where accurate replication of human performance data is required instead (e.g. ARCADIA, APEX, EPIC, IMPRINT, MAMID, MIDAS, etc.). Some of the newer architectures are still in the early development stage and may implement learning in the future (e.g. SiMA).
Perceptual learning
Although many systems use pre-learned components for processing perceptual data, such as object and face detectors or classifiers, we do not consider these here. Perceptual learning applies to architectures that actively change the way sensory information is handled or how patterns are learned online. This kind of learning is frequently performed to obtain implicit knowledge about the environment, such as spatial maps (4D-RCS [200], AIS [201], MicroPsi [202]), clustering visual features (HTM [203], BECCA [204], Leabra [42]) or finding associations between percepts (e.g. MACsi [106]). The latter can be used within the same sensory modality, as in the case of the agent controlled by the Novamente engine, which selects a picture of the object it wants to get from a teacher [205]. Learning can also occur between different modalities. For instance, the robot based on the SASE architecture learns associations between spoken commands and actions [69] and Darwin VII (BBD) learns to associate the taste value of the blocks with their visual properties [206].
Declarative learning
Declarative knowledge is a collection of facts about the world and various relationships defined between them. In production systems such as ACT-R, Soar and others, which implement chunking mechanisms (SAL [207], ADAPT [208], CHREST [209]), learning new declarative knowledge is automatic. That is to say, each time a goal is completed, a new chunk is added to declarative memory. Otherwise, knowledge can be directly input by human experts or learned, as in DSO, where contextual information is extracted from labeled training data [210]. New symbolic knowledge can also be acquired by applying logical inference rules to already known facts (GMU-BICA [211], Disciple [212], NARS [213]). In the biologically inspired systems learning new concepts usually takes the form of learning the correspondence between the visual features of the object and its name (iCub [214], DIARC [215]). Automatic acquisition of knowledge has also been demonstrated in systems with distributed representations, for example, Casimir [176].
Procedural learning
Procedural learning allows an intelligent system to acquire new behaviors or modify existing ones. The simplest way of doing so is by accumulating examples of successfully solved problems to be reused later. For example, in a navigation task, a traversed path could be saved and used again to go between the same locations later (AIS [201]). The same applies to reasoning tasks; in this case previously generated plans or solutions would be added to memory (Companions [216], CoSy [79], RoboCog [217], NARS [218]). Obviously, this type of learning is very limited, and further processing of accumulated experience is needed for better efficiency and flexibility. The utility of a rule or action could be inferred from statistics of the past applications (Theo [194], 4D-RCS [219], RALPH [220]), although reward-driven (reinforcement) learning is far more common and is not restricted to biologically inspired systems (e.g. BBD [221], CARACaS [222], FORR [223], ICARUS [224], CLARION [40]). Many examples of procedural learning using episodic learning have been described in the previous chapter.
Attentional learning
Attentional learning affects the action selection process by adjusting the relative priority of the actions or concepts according to their experienced usefulness and relevance. Emotions and drives are additional factors regulating the attention of the intelligent agent.
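The automatic declarative learning described above — "each time a goal is completed, a new chunk is added to declarative memory" — can be caricatured in a few lines. The goal and its "computation" here are purely illustrative; real chunking in ACT-R or Soar operates over production firings, not Python dictionaries.

```python
# Sketch of chunk-style declarative learning: when a goal completes,
# its result is stored and later retrieved instead of being recomputed.
declarative_memory = {}

def solve(goal):
    if goal in declarative_memory:      # retrieval beats computation
        return declarative_memory[goal]
    a, b = goal                         # "compute" the answer
    result = a + b
    declarative_memory[goal] = result   # chunk added on goal completion
    return result

assert solve((2, 3)) == 5
assert (2, 3) in declarative_memory    # the learned chunk
assert solve((2, 3)) == 5              # now answered from memory
```

The behavioral signature of such learning is a speed-up on repeated problems, which is exactly what the chunking literature measures.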

8 Applications of Cognitive Architectures
Most of the cognitive architectures reviewed in this paper are research tools and few are developed outside of academia. However, it is still appropriate to talk about their practical applications, since useful behavior in various situations is declared as a goal of many cognitive architectures, regardless of whether it was achieved by reasoning or was done as a demonstration of a learning algorithm. After a thorough search through the publications we identified more than 700 projects implemented using 86 cognitive architectures. All applications were split into major groups, which are shown in Figure 11, namely human performance modeling (HPM), robotics, psychological experiments, games and puzzles, video games, natural language processing (NLP) and miscellaneous, which included projects not related to any major group but too rare to be separated into a group of their own. Such grouping of projects emphasizes the application aspect of each project, although the goal of the researchers may have been different. The category of games and puzzles, for example, includes applications to playing board games; in the case of Soar that would be game playing, solving puzzles and logical reasoning in various domains.
Some applications belong to more than one group. For example, Soar has been used to play board games with a robotic arm [227], which is relevant to robotics and games and puzzles. Similarly, the ACT-R model of the Tower of Hanoi compared to human fMRI data [228] belongs to games and puzzles and psychological experiments. To avoid overcomplicating the diagram, in these cases we placed the project in the dominant group: for example, navigation using a mobile robot is placed in the robotics group, and the fMRI experiments using ACT-R in the psychological experiments group, since the contribution to robotics was not as significant. The only exception to the rule is psychological experiments, which also include psychophysiological, fMRI and EEG experiments. The projects in this group are the particular psychological experiments (e.g. n-back task, conditioning or attentional blindness) performed by the cognitive architecture and compared against human data. Most NLP applications are concerned with understanding and answering questions given as spoken or typed commands and are related to social robotics; however, there are also projects in this group dedicated to sense disambiguation and sentence comprehension in general.
Human Performance Modeling (HPM)
Human performance modeling is an area of research concerned with building quantitative models of human performance in a specific task environment. The need for such models comes from engineering domains where the space of design possibilities is too large, so that empirical assessment is infeasible or too costly. HPM is concerned with building models of aircraft crews [225], operators of the nuclear power plant [226], telephone operators and air traffic operators, 911 dispatch operators and people performing other complex tasks. HPM is dominated by a handful of specialized architectures, including OMAR, APEX, COGNET, MIDAS and IMPRINT. This type of modeling has been used extensively for military applications, for example, workload analysis of Apache helicopter crew [229], modeling the impact of communication tasks on the battlefield awareness [230], and decision making in the AAW domain [231]. Soar was used to implement a pilot model for large-scale distributed military simulations (TacAir-Soar) [234]. Common civil applications include models of the air traffic control task (e.g. [235], COGNET [232]) and aircraft taxi errors [233].
Human-Robot Interaction (HRI)
HRI is a multidisciplinary field studying various aspects of communication between people and robots. Most of these interactions occur in the context of social, assistive or developmental robotics. Depending on

circos. A list of applications for each cognitive architecture with relevant references is included in supplementary materials and can also be viewed in the interactive version of this diagram. Every application is a particular project completed using a cognitive architecture supported by a relevant publication. Projects with partial results or mock-ups are not included. Although none of the systems presented in this survey are yet capable of full software or video demonstration. . they allow for some level of supervisory control ranging from single vowels signifying direction of movement for a robot (SASE [236]) to natural language instruction Figure 11 Diagram showing practical applications of cognitive architectures. Also examples involving a particular algorithm isolated from the rest of the architecture are not included. Applications are divided into groups representing broad areas. interactions extend from direct control (teleoperation) to full autonomy of the robot enabling peer-to-peer collaboration. Visualization is created with http://www. emergent (red) and hybrid (blue). Architectures are divided into three groups: symbolic (green).the level of autonomy demonstrated by the robot.

NARS [244]. Games and Puzzles The category of games and puzzles includes applications like playing board games. vowel recognition (Peterson and Barney dataset [252]). which focus not only on the scores and efficiency. such as hand-written character recognition (HTM [259]. Applications in this group are almost entirely implemented by emergent architectures. HOMER [88]. By far the most popular is Unreal Tournament 2004 (UT2004). odor identification [253]. image classification benchmarks (HTM [261]. some architectures contributed to the NLP research. Besides. in particular. However. The HTM architecture is geared towards the analysis of time series data. SASE [248]) and learning English passive voice (NARS [244]). call. Categorization and Clustering Categorization. classification.g. texture classification benchmarks (ART [264]). invariant object recognition (Leabra [265]). these are mainly standalone examples. etc. FORR [266]). etc. video games and problem solving in limited domains. diagnosis of the failures in a telecommunications network (PRS [257]) and document categorization based on the information about authors and citations (OpenCogPrime [258]). speech recognition benchmarks (Sigma [246]. predicting taxi passenger demand [254] and recognition of cell phone usage type (email. medical diagnosis (Pima-Indian diabetes dataset [250]).) based on the pressed key pattern [255].com). for which there is an open-source software toolkit Pogamut [267] making it easier to create intelligent virtual characters. Soar [227]. Video games are also used as virtual domains for cognitive architectures. In the context of cognitive architectures. A few other examples from non-emergent architectures include gesture recognition from tracking suits (Ymir [256]). Computer Vision Emergent cognitive architectures are also widely applied to solving typical computer vision problems. [247]. fault diagnostics (pneumatic system analysis [251]). 
Natural Language Processing (NLP) The applications in this group are concerned with understanding written or spoken language. Specific examples are anaphora resolution (Polyscheme [243]. Kismet [240]). The ART networks. iCub [238]). the eight puzzle and the five puzzle are frequently used to demonstrate knowledge transfer (e. The computer vision applications that are part of more involved tasks. Some architectures also target non-verbal aspects of HRI. pattern recognition and clustering are common ways of extracting general information from large datasets. are discussed in the relevant sections. It is usually assumed that a command is of particular form and uses a limited vocabulary. Although it is common for cognitive architectures to use off-the-shelf software for speech recognition and text parsing. for example natural turn-taking in a dialogue (Ymir [239]. [260]). these methods are useful for processing noisy sensory data. for example for navigation in robotics. monitoring stocks (numenta. view-invariant letter recognition (ART [263]). [262]). but also take into account the believability . DIARC [245]). which are used as sophisticated neural networks. Soar [237]. changing facial expression (Kismet [241]) or turning towards the caregiver (MACsi [242]).g.(e. The simple board games with conceptual overlap like tic-tac-toe. such as predicting IT failures (grokstream. have been applied to classification problems in a wide range of domains: movie recommendations (Netflix dataset [249]). such as ART and HTM. there are several game playing etc.
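The match-and-vigilance loop that underlies ART-style categorization can be sketched in a few lines. This is a simplified, illustrative version of ART-1 for binary inputs; the data and parameter values are invented, and none of the cited projects use this exact code.

```python
import numpy as np

def art1_cluster(inputs, rho=0.5, alpha=0.5):
    """Cluster binary vectors with a simplified ART-1-style loop.

    rho is the vigilance parameter: higher values demand closer matches
    and therefore create more, finer-grained categories."""
    prototypes = []   # learned category prototypes (binary vectors)
    labels = []
    for x in inputs:
        x = np.asarray(x, dtype=bool)
        # rank existing categories by the choice function |x AND w| / (alpha + |w|)
        order = sorted(range(len(prototypes)),
                       key=lambda j: -(x & prototypes[j]).sum()
                                     / (alpha + prototypes[j].sum()))
        for j in order:
            match = (x & prototypes[j]).sum() / max(x.sum(), 1)
            if match >= rho:                       # vigilance test passed: resonance
                prototypes[j] = x & prototypes[j]  # fast learning: intersect prototype
                labels.append(j)
                break
        else:                                      # no category resonates: recruit a new one
            prototypes.append(x)
            labels.append(len(prototypes) - 1)
    return labels

print(art1_cluster([[1, 1, 0, 0], [1, 1, 1, 0], [0, 0, 1, 1], [0, 1, 1, 1]]))  # → [0, 0, 1, 1]
```

Raising the vigilance parameter rho forces tighter matches and splits the data into more categories; lowering it merges inputs into broader ones.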

Psychological Experiments

The psychological experiments group includes replications of a number of psychophysiological, fMRI and EEG experiments using cognitive architectures. Here the goal is either to demonstrate that a cognitive architecture can numerically model human data or to give reasonable explanations for existing psychological phenomena. Most of the experiments are conducted in simulated environments, although there are some examples implemented on a physical robot (e.g., a model of perceptual categorization on the DarwinVII robot [181]). If the data produced by a simulation matches the human data in some or most aspects, this is taken as an indication that a given cognitive architecture can imitate human reasoning. The cognitive model can then be either used for making predictions about behaviors in different situations or analyzed further and used as an explanation for the psychological mechanisms behind known phenomena.

Robotics

There are numerous applications of cognitive architectures in robotics. Navigation and obstacle avoidance are basic behaviors, which can be useful on their own or used as part of more complex behaviors, for example in assistive robotics. The fetch and carry tasks were very popular in the early days of robotics research as an effective demonstration of robot abilities, such as learning by instruction. Some well-known examples include a trash collecting mobile robot (3T [277]) and a soda can collecting robot (Subsumption [278]). Through a combination of simple vision techniques, such as edge detection and template matching, and sensors for navigation, these robots were able to find the objects of interest in unknown environments. More recent cognitive architectures tend to solve search and object manipulation tasks separately.

Typically, experiments involving visual search are done in very controlled environments, and preference is given to objects with bright colors or recognizable shapes to minimize visual processing. Sometimes markers are used, such as printed barcodes attached to the object, to simplify recognition (Soar [279]). When visual search and localization is the end goal, the environments are more realistic (e.g., the robot controlled by CoSy finds a book on a cluttered shelf using a combination of sensors and SIFT features [280]). It is important to note that visual search in these cases is usually a part of a more involved task.

Object manipulation involves arm control to reach and grasp an object. While reaching is a relatively easy problem and many architectures implement some form of arm control, gripping is more challenging even in a simulated environment. The complexity of grasping depends on many factors, including the type of gripper and the properties of the object. One workaround is to experiment with grasping soft objects, such as plush toys (ISAC [281]). More recent work involves objects with different grasping types (objects with handles located on top or on a side), demonstrated on a robot controlled by DIARC [282]. Another example is iCub adapting its grasp to cans of different sizes, boxes and a ruler [283].

Industrial applications are represented by a single architecture, 4D-RCS, which has been used for teleoperated robotic crane operation [284], bridge construction [285], an autonomous cleaning and deburring workstation [286] and the automated stamp distribution center for the US Postal Service [287]. Other applications include robotic salesmen, tutors and medical robots.

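The priority-ordered behavior arbitration associated with Subsumption-style controllers can be sketched in a few lines. This is a toy illustration only: the behaviors, sensor fields and thresholds below are hypothetical and are not taken from any of the cited systems.

```python
# Sketch of fixed-priority behavior arbitration in the spirit of Subsumption-style
# controllers: the highest-priority behavior that fires suppresses all lower ones.
# The behaviors and sensor fields are hypothetical, for illustration only.

def avoid(sensors):
    # Highest priority: veer away when an obstacle is too close.
    if sensors["obstacle_distance"] < 0.3:
        return "turn_left"
    return None  # not applicable; defer to lower layers

def seek_target(sensors):
    # Middle priority: head toward a detected object of interest.
    if sensors["target_visible"]:
        return "approach_target"
    return None

def wander(sensors):
    # Lowest priority: default exploratory behavior, always applicable.
    return "wander"

BEHAVIORS = [avoid, seek_target, wander]  # ordered from highest to lowest priority

def select_action(sensors):
    for behavior in BEHAVIORS:
        action = behavior(sensors)
        if action is not None:   # first applicable behavior wins
            return action

print(select_action({"obstacle_distance": 0.1, "target_visible": True}))   # → turn_left
print(select_action({"obstacle_distance": 1.0, "target_visible": True}))   # → approach_target
print(select_action({"obstacle_distance": 1.0, "target_visible": False}))  # → wander
```

The appeal of this scheme for basic navigation and obstacle avoidance is that each layer remains independently testable, and safety-critical reflexes always override deliberative goals.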
Virtual Agents

Simulations and virtual reality are frequently used as an alternative to physical embodiment. One of the advantages of virtual environments is that they can provide useful information about the state of the agent at any point in time; simulations are also frequently used to simplify or replace vision. For instance, in the military domain, simulations model the behavior of soldiers in dangerous situations without risking their lives. Some examples include modeling agents in a suicide bomber scenario (CoJACK [73]), peacekeeping mission training (MAMID [190]), command and control in complex and urban terrain (R-CAST [288]) and tank battle simulation (CoJACK [289]). Simulations are also common for modeling the behavior of intelligent agents in civil applications, for example in the social interaction context (ARS/SiMA [290]) or in learning scenarios, such as playing fetch with a virtual dog (Novamente [291]). This is useful for studying the effect of emotions on actions, although the associations between emotions and actions are usually predefined.

9 Discussion

The main contribution of this survey is in gathering and summarizing information on a large number of cognitive architectures from various backgrounds (computer science, psychology, philosophy and neuroscience). We also try to answer the question of what the practical capabilities of the existing architectures are by categorizing their demonstrated applications. Overall, we have identified some general trends in the development of the field.

Historically, psychology and computer science were an inspiration for the first cognitive and agent architectures. More biologically realistic models were developed in parallel, but became widely recognized as a viable alternative only relatively recently. Despite the differences in theory and terminology, they tackled the same issues of action selection, efficient data processing and storage, planning and general reasoning. Another rising paradigm is represented by machine learning methods, which have found enormous practical success. Some of these models represent the emergent paradigm, similarly to the traditional cognitive architectures; they are mainly concerned with perception, although lately there have been attempts at implementing more general inference and memory mechanisms with deep learning techniques, which may lead to interesting applications in the future. Thus, hybrid models combining both symbolic and subsymbolic approaches are currently the most promising and will likely continue to be popular in the future.

With respect to cognitive functions, we described common approaches to implementing important aspects of human cognition, such as perception, attention and memory. While the importance of vision and perception in general is universally acknowledged, their treatment by architectures remains rather superficial; research in cognitive architectures is focused more on higher-level abilities such as learning, planning and general reasoning, although there is a trend toward implementing multimodal perception. Action selection in robotics, for instance, is done using methods ranging from priority queues to reinforcement learning [292].

Our data on the practical applications of cognitive architectures demonstrates that, with few exceptions, architectures are narrowly focused on a particular area. The visualization in Figure 11 highlights the specialization areas of the different types of architectures. As expected, symbolic architectures are widely used for planning and reasoning, human performance modeling and psychological experiments. On the other hand, emergent architectures are mainly applied to clustering and vision tasks; however, their support for inference and general reasoning is inadequate for solving common AI problems. Hybrid architectures are more uniformly represented across all application categories, with several applications in the robotics domain. Finally, there is a significant gap between general research in robotics and computer vision and research in these areas within the cognitive architectures domain. It is apparent that biologically inspired models cannot demonstrate the same range and efficiency in practical applications compared to the less theoretically restricted systems based on heuristics and engineering: biological systems are mostly limited to controlled domains, realistic unstructured environments are rather rare, and many of the demonstrated results are proof-of-concept.
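As a concrete illustration of the reinforcement-learning end of the action-selection spectrum mentioned above (methods ranging from priority queues to reinforcement learning [292]), a minimal tabular Q-learning loop might look as follows. The toy corridor environment and all parameter values are invented for illustration and are not taken from [292] or from any architecture in this survey.

```python
import random

def q_learning_corridor(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a toy 5-cell corridor (illustrative only).

    The agent starts in cell 0; reaching cell 4 yields reward 1 and ends
    the episode. Actions: 0 = step left, 1 = step right."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(5)]                    # q[state][action]
    for _ in range(episodes):
        s = 0
        while s != 4:
            if rng.random() < eps:                        # epsilon-greedy selection
                a = rng.randrange(2)
            else:
                a = 1 if q[s][1] >= q[s][0] else 0
            s2 = min(4, s + 1) if a == 1 else max(0, s - 1)
            r = 1.0 if s2 == 4 else 0.0
            # one-step temporal-difference update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning_corridor()
policy = [1 if q[s][1] >= q[s][0] else 0 for s in range(4)]
print(policy)  # the learned greedy policy moves right in every cell: [1, 1, 1, 1]
```

Unlike a fixed priority ordering, the preference between actions here is not designed in but emerges from the learned value estimates, which is why such methods coexist with hand-crafted arbitration in the architectures surveyed.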

With respect to the field as a whole, there is a clear trend towards developing hybrid cognitive architectures that take advantage of both biologically inspired and engineering techniques. Some exceptions to the generally narrow, proof-of-concept status of biologically inspired systems already exist, for example Grok8, a commercial application for IT analytics based on the biologically inspired HTM architecture, so the performance gap may be reduced in the future.

At the same time, there is far less collaboration than would be expected in an interdisciplinary area such as cognitive architectures research. Some of this is due to the fact that many architectures are developed as closed-source projects in small groups; although there are numerous advantages of open-source development (cognitive architectures such as ART, Soar, ACT-R, HTM and Pogamut attract a large community of researchers and are frequently referenced in publications outside the main developing group), at present closed-source projects are far more commonplace. Another factor impeding communication is the sometimes impenetrable language used by the biologically- or neuro-inspired architectures, even when they refer to common notions such as saliency, attention or working memory. This is reminiscent of the terminological differences that, as we have mentioned already, existed between the architectures from cognitive and engineering backgrounds.

8 http://numenta.com

Acknowledgements

This research was supported through grants to the senior author, for which all the authors are grateful: Air Force Office of Scientific Research (FA9550-14-1-0393), the Canada Research Chairs Program (950-219525), and the Natural Sciences and Engineering Research Council of Canada (RGPIN-2016-05352).

References

[1] A. Newell, "Physical symbol systems," Cogn. Sci., vol. 4, no. 2, pp. 135–183, 1980.
[2] S. Russell and P. Norvig, Artificial Intelligence: A Modern Approach. Prentice Hall, 1995.
[3] R. Sun, "Desiderata for cognitive architectures," Philos. Psychol., vol. 17, no. 3, pp. 341–373, 2004.
[4] A. Newell, "Précis of Unified theories of cognition," Behav. Brain Sci., vol. 15, no. 3, pp. 425–492, 1992.
[5] A. V. Samsonovich, "Toward a unified catalog of implemented cognitive architectures," in Proceedings of the 2010 Conference on Biologically Inspired Cognitive Architectures, 2010, pp. 195–244.
[6] J. R. Anderson and C. Lebiere, "The Newell Test for a theory of cognition," Behav. Brain Sci., vol. 26, no. 5, pp. 587–601, 2003.
[7] S. Adams, I. Arel, J. Bach, R. Coop, R. Furlan, B. Goertzel, J. S. Hall, A. Samsonovich, M. Scheutz, M. Schlesinger, S. C. Shapiro, and J. Sowa, "Mapping the Landscape of Human-Level Artificial General Intelligence," AI Mag., vol. 33, no. 1, pp. 25–42, 2012.
[8] T. Metzler and K. Shea, "Taxonomy of Cognitive Functions," in Proceedings of the 18th International Conference on Engineering Design, 2011, pp. 330–341.
[9] P. Wang, "Natural language processing by reasoning and learning," in Proceedings of the International Conference on Artificial General Intelligence, 2013, pp. 160–169.
[10] U. Faghihi and S. Franklin, "The LIDA Model as a Foundational Architecture for AGI," in Theoretical Foundations of Artificial General Intelligence. Atlantis Press, 2012, pp. 103–121.
[11] S. Schaat, A. Wendt, M. Jakubec, F. Gelbard, and S. Kollmann, "Interdisciplinary Development and Evaluation of Cognitive Architectures Exemplified with the SiMA Approach," in Proceedings of the EuroAsianPacific Joint Conference on Cognitive Science, 2015.
[12] B. Goertzel and G. Yu, "A cognitive API and its application to AGI intelligence assessment," in Proceedings of the International Conference on Artificial General Intelligence, 2014, pp. 242–245.
[13] W. Bridewell and P. Bello, "Incremental Object Perception in an Attention-Driven Cognitive Architecture," in Proceedings of the 37th Annual Meeting of the Cognitive Science Society, 2015, pp. 279–284.
[14] J. K. Tsotsos, "Attention and Cognition: Principles to Guide Modeling," in Computational and Cognitive Neuroscience of Vision, Q. Zhao, Ed. Elsevier, 2015.

[34] J. 122–136. 259–278. vol.08130v1. “Cognitive Architectures and Autonomy: A Comparative Review. Graves. Butt. 529. Veloso. J. vol. and S. pp. C. E. Franklin. M. Helgasson. M. Antonoglou. 2009. Lect. 484–489. Bellemare.” in Proceedings of AAAI Spring Symposium on Knowledge Representation and Ontology for Autonomous Systems. 171. Mazhar. A. 2012. 141–160. “PRODIGY4. M. 2012. 101–142. 2012. and K. Sheikh. 10. C. N. Rui.” Int. Pennachin. 1999. R. 2014.. Sci. “Generative Music. S. Beattie. “Brief Survey of Cognitive Architectures. and D. 50–70. 2016.” in Frontiers in Artificial Intelligence and Applications. M. Petersen. Thagard. Kavukcuoglu. Lillicrap. 2015. “Cognitive architectures. Gil.. C. 1992.01526. G. B. Z. UUCS-13-004. Khattak. . Ikle. [38] A. 2007. pp. A. arXiv1507. Ramsay.. Etzioni.” in AISB Quarterly.” J. 109–127. no. and G. Mnih. and O. Notes Comput.[15] U. pp. Pitt. Rogers. Joshi. “ARDIS: Knowledge-based dynamic architecture for real-time surface visual inspection. 2005. and Computer-Assisted Composition in MusiCog and ManuScore. W. Joseph. Garcia-Alegre. vol. and M.” Cogn. E. “Human-level control through deep reinforcement learning. Res. Using Context. M. J. J. G. [32] D. Dieleman. Metta. Sadik. pp. Cognitive Modelling. Veness. pp. 7540. Alicia. “Cyc.” PhD Thesis. Langley. 1–10.” IEEE Trans. C. Blythe. “Cognitive Architectures: Where do we go from here?. J. and D. Laird. no. Guez. 2009. 2011. V. D. 5601. D. Schrittwieser. Calvi. [25] A. Laird. Atlantis Press. vol. Langley. S. Goertzel. [33] V. Kahn. A. Huang.” Lect. “The Soar Cognitive Architecture. Geisweiller. Knoblock. [19] D. [35] N. J. Silver. 2. [17] S.” Nature. Wang. 2013. 8–13. “The Soar of cognitive architectures. I. pp. Eds. 11. [27] K. A. Van Den Driessche. vol. M. J. Pasquier. pp. pp. and S. Asselman. 171. N. Aammou. and X. Leach. Duch. Panneershelvam. 518. [23] P.” in The Cambridge handbook of cognitive science. “Metaknowledge for Autonomous Systems. Maddison. Intell. J. and S. 2010. J. I. 
Syst. 395–404. King. Poirier. and K. Rep. Antonoglou. Kalchbrenner. 2. 2015. [30] J. Res. Hassabis. A. no. Kalchbrenner. “Grid Long Short-Term Memory. D. “A Survye of Artificial Cognitive Systems: Implictions for the Autonomous Development of Mental Capbilities in Computational Agents.” arXiv : 1511. Joulin. [16] T. Gen. Syst. Notes Artif. Minton. no. 3. and M.” Appl. Sci. Evol.” Int. Butt. Goertzel. L. I. Res. pp. Wang. Danihelka.-E. vol. pp. 2008. pp. Fidjeland. 134.. “Dynamic Computation and Context Effects in the Hybrid Architecture AKIRA. Nasseh. Cambridge: Cambridge University Press. Notes Bioinformatics). no. 7585. [40] R. Graves. vol. Comput. Oentaryo. M. P. Eds. “Emotional Cognitive Architectures. 2013. Maxwell. and L. Rusu. IOS Press. vol. [21] W. Kumaran. Vernon. pp. and A. G. Grewe.. S. “Cognitive architectures: Research issues and challenges. O. Sutskever. D. [36] T. Intell. 9. and E. 1–30. [24] S. Mnih. 2009. pp. Kavukcuoglu. Lect. A.. Intell. (including Subser.” in Engineering General Intelligence. B. “Glocal memory: A critical design principle for artificial brains and minds. 141–160. D. vol. T. G. R. and J. A. 10. Thórisson and H. 74. 2014. 2015. Foxvog. [29] B. Nham. [18] J. 2. Interdiscip. C. C. Larue. “Cognitive architectures: Research issues and challenges. [26] G. [20] J. [22] P. S. Carbonell. “Comparative Study of Cognitive Architectures. Model. Ostrovski. Artif. 224–235.. S. 2014.” arXiv Prepr.” Neurocomputing. Pennachin. Profanter. “Multiple Object Recognition with Visual Attention. Legg. 2015. “Mastering the game of Go with deep neural networks and tree search. “The Cognitive Symmetry Engine. S. [37] D. Merrill. Ba. 529–533. Rep. Kavukcuoglu. I. 1–3. A. Epstein.” in International Conference on Affective Computing and Intelligent Interaction. Goertzel. A. Comput. 2004. pp. CTIT 2013. 1–30. Peterson. Faghihi. Henderson and A. “Cognitive Architectures.” Iclr. P. 
“A Hybrid Architecture for Situated Learning of Reactive Sequential Decision Making. M. Pezzulo and G..” in Proceedings of the 2013 International Conference on Current Trends in Information Technology. T.” Nature. C. Reilly. C. L. J. pp.” Cogn. and A. Sandini. P. vol.” Tech. E. J. pp. Y.” in HaupteSeminar Human Robot Interaction. p. Part 1. Lanctot. K.” Tech. H. Silver. 2. Frankish and W. A.” 2010. K. Conf. Sun. Sifre. [28] D. [31] P. J. Martin. 14.0: TheManual and Tutorial. and N. “A Roadmap towards Machine Intelligence. Baroni. no. Rincon. Wierstra. Laird. G. no. Riedmiller. 2012. Guinea. A. [39] B. no. Mikolov. vol. 84–94. V. Rogers.

pp. L. Simul. Calderita. Wyatte. D. [55] D. Smeraldi. no. [42] R. Eds. C. MA: MIT Press. 2002. 750–755. Scheutz. [59] T. C.” in 1st Joint Emergency Preparedness and Response/Robotic and Remote Systems Topical Meeting. Bandera. Dautenhahn and C. Maffei. and M. Thalgott.. pp. Metta. 1526–1532. vol.. Jilk. “Circos: an Information Aesthetic for Comparative Genomics. Ivaldi. B.” in Imitation in Animals and Artifacts. Bernard. pp. J. Matthies. vol. Cambridge. L..” in Proceedings of AAAI 2006 Robot Workshop. “Stereo vision-based navigation for autonomous surface vessels. vol. Chetouani. P. R. [51] A. “Artificial intelligence: Connectionist and symbolic approaches. Manso. pp. 2013. Lyons. pp. From unified to hybrid approaches. Breazeal and B. and D. 2006. T. B. [49] M. Birol. 9. A. [58] D. “Online multiple instance learning applied to hand detection in a humanoid robot.[41] S. Ciliberto. Sun. Bustos. 2001. Sun. Romero-Garces. Conf. [64] C. M. Herd. Anderson. G. pp. and P. J. 566–572. [56] G. 497–505. 2015. Barbera. A. Factors. Miller and M. “A computational model of Tower of Hanoi problem solving. and G. Sigaud. McRaven. “Hybrid Connectionist-Symbolic Modules. J. 1996. Mingus. E. pp. Filliat. Aghazarian. J. J. Bustos. Jilk. Manso. Behav. Natale.” AI Mag. 783–789.” Int. Scheutz. [57] J. Martinez-Gomez. S. D. and O. A. Prieto. and A. and R. L.” Hum. 363– 390. and G. [65] R. J. Marfil. O. 2002. and D. Pergamon/Elsevier. Scassellati. O’Reilly. Snook. Morrow. “The cognitive architecture of a robotic salesman. Jones. 2009. Albus and A. D. 2006. “A neural approach to adaptive behavior and multi-sensor action selection in a mobile device. 2006. adaptive.” Connect. V. Herd. H. Integr. L. Rohrer.” SPIE Defense. Sánchez-Fibla. J.” in Proceedings of the ninth National conference on Artificial intelligence AAAI. no. V. P. vol. Duro. Funk. A. 50. no. R. [45] M. [60] M. Rothganger. Bandera. Vis. Howard. Kramer. 
R.” in Proceedings of the 2002 IEEE International Conference on Robotics and Automation. pp. 72. Robot. Krzywinski. “A cognitive approach to vision for a mobile robot. [44] R. Krichmar and J. “Stereo vision for planetary rovers: Stochastic modeling to near real-time implementation. Anzalone. 2011. 3. J. [63] J. Sens. “Perception and human interaction for developmental learning of objects and affordances. 248–254. S. no. J.” in Proceedings of the 5th International Conference on Natural Computation. Verschure. 1991. Cserey. Marra. “Toward Social Cognition in Robotics: Extracting and Internalizing Meaning from Perception. Schermerhorn. A. “An embodied biologically constrained model of foraging: From classical and operant conditioning to adaptive real-world behavior in DAC-X. pp. “Recurrent processing during object recognition. [50] L. S. [54] F. 2006. F. vol. F. M. Psychol. D. 1. J. 4. 19.. Bandera. and M. 1992. 3–18.” in Workshop of Physical Agents. Wyatte. 17. 2004. J. Droniou. ICNC 2009. M. Slack. 1347–1352. pp. Martinez-Gomez. [62] C. 2011. and D. 99–103. Nehaniv. Marcos. Romero-Garcés. 2015.” J. 1998. S. bimodal people tracking for indoor environments.” in Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Kokinov. Gray. J. Santos-Pata. Dingler. 2013. 2014. A. pp. F.” Neural Networks. Xavier. and D. [46] W. Faiña. D. reliable. [61] P. 1639–1645. no. 1997. “Recurrent processing during object . Lyubova. M.” Int. vol. [52] S. Adapt. Artif. no. and P. Brick. J.. Benjamin. 8. Secur. J. D. vol.. Spanish Assoc. pp. L.. P. vol. 2..” in IProceedings EEE International Conference on Intelligent Robots and Systems.” in IEEE- RAS International Conference on Humanoid Robots. J. Calderita. Marfil. L. 28. 2012. “Brain-Like Artificial Intelligence: Analysis of a Promising Field. [48] J.” PhD Thesis. 2008. M. N. “Adaptive Learning Application of the MDB Evolutionary Cognitive Architecture in Physical Agents. [53] B. Connors. Comput. A. 
Bandera. [43] B. 1972–1973. J. “Cognitive architectures: choreographing the dance of mental operations with the task environment. “Micro-Level Hybridization in the Cognitive Architecture DUAL. 6. C. Trotz. P.. and P. pp. Intell. Varma. 88–108. no. Gascoyne. Schein. D. P. “DIARC: A Testbed for Natural Human-Robot Interactions. D. K. “Intelligent control and tactical behavior development: A long term NIST partnership with the army.” Genome Res. I. F. A.” 2013. 15. “Challenges in Building Robots That Imitate People. pp. “Model-free learning and control in a mobile robot. 2009. Mingus.” International Encyclopedia of the Social and Behavioral Sciences. Huntsberger. A. 1.” Front. Gerardeaux-Viret. C.” Conf. O’Reilly. 71–91. “Fast. Bellas. [47] R. “Global symbolic maps from local navigation.

“A psychoanalytically- inspired motivational and emotional system for autonomous agents. Conf. 69–74. K. Pedrotti. G. 1. Goertzel. and P. Robots.” in Proceedings of the IEEE Symposium on Computational Intelligence and Games. [286] K. A.” J. Humanoid Robot. D. 2013. “Learning to play Mario. Sauser. R. “RPD-enabled agents teaming with humans for multi- context decision making. education. 660–666. “Delivery of an Advanced Double-Hull Ship Welding. Small and C.” in 2009 IEEE Congress on Evolutionary Computation. B. Bodenheimer. Aisa. no. Senna. G. L. Kase. and applications. Bittner. Laird.. “CAD directed robotic deburring. [280] D. Jacoff. Ulam. Van Hoorn.” in Proceedings of the International Conference on Autonomous Agents. “Hierarchical controller learning in a first-person shooter. J. 2–3..” Proc. pp. Creat. Castillo. J. M. pp. emergent behaviors from a carefully evolved\nnetwork.. CCA-TR-2009-03. Wang. Intell. Silva. A. Rep. Yen. and M. N. [274] S. A. 2006. Inspired Cogn. [279] A. J. Bruckner. 2007. Togelius. J. 2009. Kotseruba. and J. J. M. “An Architecture for Vision and Action. 157. 211–221. 2016. Mininger and J. Syst. “Reflection in Action: Model-Based Self-Adaptation in Game Playing Agents. 2004. Goel. S. “CoJACK: A high-level cognitive architecture with demonstrations of moderators. Kahn. vol. “The NIST Real-time Control System (RCS): an approach to intelligent systems research. Wintermute. 2009. M. and R. 1999. M. 2012.” Front. 2004. Firby. 6. M. vol. Ritter. Jones. [283] E. Cogn. pp. [290] S. and J. 1. 1995. P.. vol. CEC 2009.” in Third International ICSC (International Computer Science Conventions) Symposia on Intelligent Industrial Automation and Soft Computing.” Auton. E.” in Proceedings of the 14th international joint conference on Artificial intelligence (IJCAI’95). 6648–6653. L. 1.” Int. F. MSc Thesis. pp. “Agent Smith: Towards an evolutionary rule-based agent for interactive dynamic games. Cuadrado and Y. 3. no. 
Subagdja.” in Challenges in Game Artificial Intelligence: Papers from the AAAI Workshop. Artif.” Ind. Bunch. 1. “Creating human-like autonomous players in real-time first person shooter computer games. [269] R. 19–20. C. 294–301.” in 2009 IEEE Symposium on Computational Intelligence and Games. “Imagery in cognitive architecture: Representation and control at multiple levels of abstraction.. A. López.” Int. Pirjanian. [276] I. 21–37. Krause. 1989. Merelo. vol. R. and F. 2016. B. 2.” in Proceedings of the second international symposium on robotics and manufacturing research. Soc.. and M. Paul.” Tech. vol. R. and implications for situation awareness. Syst. Electron. Sarkar. Sjö. B. [292] P. Saez. [275] M. “Iterative learning of grasp adaptation through human corrections. K. Billard. Autom. Robot.” Rob. J. I. pp. IRIS-99-375. E. no. 2007. Scheutz.