Due: 11/04/2016
Embodied Cognition and Dynamic/Complex Systems Theory have come a long way
from the early, more rigid interpretations of cognitive science. Both methodologies emphasize some
type of interaction between the mind and the body, or between the mind, the body, and the environment
itself. That is to say, cognitive processes can be traced back to these key
interactions between the physical body and cognitive phenomena. To fully understand both
methodologies, we must delve into their main principles as well as their relation to Marr's levels of analysis.
First and foremost, Embodied Cognition has a distinct set of principles as well as a
variety of views that exist within the methodology itself. Nonetheless, the emerging viewpoint of
embodied cognition holds that cognitive processes are deeply rooted in the body's interactions
with the world (Wilson, 2002). We can go further and say that the starting basis for this
theoretical stance is that this is not a mind working on abstract problems, but rather a body that
requires a mind to make it function (Wilson, 2002). Generally, this entails that the body is a key
component in shaping the mind. All the interactions that take place are necessary for
explaining how certain cognitive processes arise. Naturally, however, this was not the case
previously, since most branches of the cognitive sciences argued that the mind was essentially an
abstract information processor whose connections to the outside world were of little theoretical
importance (Wilson, 2002). It was thought that perceptual and motor systems were not relevant to
understanding central cognitive processes and that they were merely input and output devices
(Wilson, 2002). Despite this early dismissal of the physical body's role, Embodied
Cognition has persevered and gained momentum in establishing a sense of
perpetual communion between the physical and the mental, on the basis that the theory
emphasizes sensory and motor functions, as well as their importance for successful interaction
with the environment (Wilson, 2002). In essence, in order to fully understand the mind, we must
also understand its relationship to a physical body that interacts with the world (Wilson, 2002).
To further build upon this notion, we can also argue that this phenomenon can be attributed to
our evolution from early primates to the modern human. It is said that neural resources
in early primates were devoted specifically to perceptual and motoric processing, and that
cognitive activity consisted largely of immediate, on-line interaction with the environment
(Wilson, 2002). This in turn supports the idea that instead of being centralized, abstract, and
distinct from peripheral input and output modules, human cognition may instead have deep roots
in sensorimotor processing (Wilson, 2002). This, however, only gives a generalized
understanding of what Embodied Cognition could potentially be. A great
deal of diversity within Embodied Cognition alone has arisen from the more generalized theory of
a mind interacting with a body which in itself interacts with the world. We can introduce a set of
claims for Embodied Cognition, each of which promotes a subclass of the methodology and can
potentially dictate its own value within it. The claims are as follows
(Wilson, 2002):
1. Cognition is situated.
2. Cognition is time pressured.
3. We off-load cognitive work onto the environment.
4. The environment is part of the cognitive system.
5. Cognition is for action.
6. Off-line cognition is body based.
A formal review of a couple of these claims will be covered briefly, but each claim
embeds the interactions of mind, body, and environment. The first of these claims is the idea that
cognition is situated. That is, while a cognitive process is being carried out, perceptual
information continues to come in that affects processing, and motor activity is executed that
affects the environment in task-relevant ways (Wilson, 2002). A few examples include driving,
running, or talking with someone. To simplify the concept further, situated cognition involves
interacting in some sense with the things that the cognitive activity is about (Wilson, 2002). This
generally entails that cognitive activities such as planning, day-dreaming, or even remembering
are not situated: these activities do not interact with the things that are planned or dreamed
about. The second claim is that cognition is time pressured. That is,
situated agents must deal with the constraints of real time, or runtime (Wilson, 2002). One
can readily see this proposition play out in reality: a creature in a real environment does not have
the leisure to build and manipulate internal representations of a situation over an indeterminate
amount of time. The creature must cope with predators, prey, stationary objects, and terrain as
fast as the situation calls for it (Wilson, 2002). As one can see, such activity requires real-time
responsiveness to feedback from the environment. Certain activities may not seem intelligent in
and of themselves, but one can argue that greater cognitive complexity can be built up from
successive layers of procedures for real-time interaction with the environment (Wilson, 2002). In
essence, we can directly state that time pressure shapes situated cognition. This entails a
representational bottleneck: when a situation calls for fast, continuously evolving responses,
there may simply not be enough time to build up a full-blown mental model of the environment
from which to derive a plan of action (Wilson, 2002). Instead, we need cheap and efficient
methods for generating situation-appropriate action on the fly. If time pressure is not present in
the situation, however, then we have the opportunity to assess the overall problem and take the
necessary steps to adjust to the situation with a well-thought-out plan.
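This trade-off between leisurely planning and fast situated response can be sketched as a toy decision loop; the cost numbers and the reflex table below are invented for illustration and are not drawn from Wilson (2002):

```python
# Toy sketch of time-pressured cognition: a full deliberative plan is
# preferable but costs more time than some situations allow, so the agent
# falls back on a cheap precompiled reflex. All costs here are invented.

PLANNING_COST = 50   # hypothetical time units to build a full mental model
REFLEX_COST = 2      # hypothetical time units for a precompiled response

# Precompiled stimulus-response pairs acquired through prior learning.
REFLEXES = {
    "predator": "flee",
    "prey": "chase",
    "obstacle": "swerve",
}

def deliberate(stimulus):
    """Stand-in for slow model-building and planning."""
    return "optimal response to " + stimulus

def respond(stimulus, time_budget):
    """Pick the best action the time budget permits."""
    if time_budget >= PLANNING_COST:
        return deliberate(stimulus)   # leisure to build and search a full model
    if stimulus in REFLEXES:
        return REFLEXES[stimulus]     # situation-appropriate action on the fly
    return "freeze"                   # no reflex, no time: the agent breaks down

print(respond("predator", time_budget=5))    # reflex wins under pressure
print(respond("predator", time_budget=100))  # planning wins at leisure
```

The point of the sketch is only that the same stimulus yields different cognitive strategies depending on the time budget, which is the representational bottleneck in miniature.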
While Embodied Cognition has become a reputable methodology for explaining
cognitive processes with respect to the interactions between the body, the mind, and the
environment, Dynamic/Complex Systems Theory operates on the basis that the brain can be
understood as a complex system or network in which mental states emerge from the interaction
between multiple physical and functional levels (Bassett & Gazzaniga, 2011). Many of the
factors we need to consider include complexity and multiscale
organization as well as spatial and temporal scaling. To inquire further, we need to understand
the mind-brain connection that exists along with what is known about the structure of the brain
and its organizing principles. To put it simply, the brain is a complex temporally and spatially
multiscale structure that gives rise to elaborate molecular, cellular, and neuronal phenomena that
together form the physical and biological basis of cognition (Bassett & Gazzaniga, 2011). It
should also be noted that the structure within any given scale is organized into modules (Bassett
& Gazzaniga, 2011). This allows for the basis of cognitive functions that are primed to be
adaptable to any discrete changes within the environment. Furthermore, spatial and temporal
scaling display similar organization at multiple resolutions. Looking at spatial scaling alone, cells
are heterogeneously distributed throughout the human brain (Bassett &
Gazzaniga, 2011). From this, we can also infer that the connections between the
brain's subcomponents are heterogeneous as well. Temporal scaling, meanwhile, is more
closely associated with the rhythmic nature of the brain during neuronal activity. These rhythms
vary in frequency and relate to different cognitive capacities. To name one, the
highest-frequency gamma band is greater than 30 Hz and is thought to be associated with
cognitive binding of information from various sensory and cognitive modalities (Bassett &
Gazzaniga, 2011), while lower-frequency bands relate to other cognitive
functions. Other key components also exist within Dynamic/Complex Systems Theory,
including modularity and emergence. Briefly, what these two concepts bring
to the methodology is the proposition that the entire brain system can be decomposed into
subsystems and modules. Emergence, meanwhile, captures the idea that the behavior, function, and/or
other properties of the system are more than the sum of the system's parts at any particular level
or across levels (Bassett & Gazzaniga, 2011). Further inquiries into both of these concepts will
be discussed later in this paper. Nonetheless, Dynamic/Complex Systems Theory revolves
around the relationship between brain function and the physical properties that manifest during cognitive
processes.
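The modular-network picture can be made concrete with a toy graph in which elements connect densely within a module and only sparsely across modules; the graph below is invented for illustration and is not brain data:

```python
# Toy modular network: two modules with dense within-module edges and a
# single between-module edge, illustrating elements "more highly connected
# to elements within the same module than to elements in other modules".

EDGES = [
    ("a1", "a2"), ("a1", "a3"), ("a2", "a3"),   # module A: fully connected
    ("b1", "b2"), ("b1", "b3"), ("b2", "b3"),   # module B: fully connected
    ("a1", "b1"),                               # sparse between-module link
]
MODULES = {"A": {"a1", "a2", "a3"}, "B": {"b1", "b2", "b3"}}

def edge_counts(edges, module):
    """Count edges inside the module vs edges crossing its boundary."""
    within = sum(1 for u, v in edges if u in module and v in module)
    between = sum(1 for u, v in edges if (u in module) != (v in module))
    return within, between

for name, members in MODULES.items():
    w, b = edge_counts(EDGES, members)
    print(name, "within:", w, "between:", b)   # within-module edges dominate
```

Because the single crossing edge carries all the coupling, either module could rewire internally without perturbing the other, which is the robustness-and-adaptability argument in its simplest form.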
To compare these two methodologies, we can do so in terms of Marr's levels of analysis.
Embodied Cognition closely resembles the implementation level of analysis, while
Dynamic/Complex Systems Theory is much more situated at the algorithmic level. First,
however, we must understand the basic principles of Marr's levels of analysis.
This framework has withstood the test of time: it has remained the canonical
scheme for organizing formal analyses of information-processing systems for over thirty years
since the levels were first introduced (Griffiths, Lieder, & Goodman, 2015). Looking at the levels
themselves, we have the abstract characterization of the computational problem being solved, or
the computational level. We then have the algorithm executing that solution, or the
algorithmic level (Griffiths, Lieder, & Goodman, 2015). Finally, we have the hardware
implementing that algorithm, or the implementation level (Griffiths, Lieder, & Goodman,
2015). While Marr contributed greatly with this well-known framework, his contributions reach
farther still: they established that it is valid, fruitful, and even necessary to analyze cognition by
forming abstraction barriers, which result in levels of analysis (Griffiths, Lieder, & Goodman,
2015). That is, separating cognition into layers invites a better analysis of the entire process as a
whole. This provides a means of carefully observing cognitive processes and situating newer
frameworks such as Embodied Cognition and Dynamic/Complex Systems within the levels of
analysis. The main argument here, however, is that we can assign each conceptual framework to
a level of analysis: Embodied Cognition to the implementation level and Dynamic/Complex
Systems to the algorithmic level. For Embodied Cognition, this rests on the proposition that the
interactions between the mind, the body, and the environment are as relevant as the connections
that occur within the human mind. While we briefly covered certain claims of Embodied
Cognition earlier, let us go over one of the few claims that closely explains the cognitive process
despite its low popularity within the cognitive science community. We will
be combining the idea that we off-load cognitive work onto the environment with the claim that
off-line cognition is body based, since both claims build on one another. Taking a look at how
off-loading works, we need to understand that we frequently choose to run our cognitive
processes off-line. While this is true most of the time, some situations force us to function
on-line (Wilson, 2002). This presents a key question: how do we handle our cognitive limitations
in these on-line situations? One answer is that we simply break down; we begin to panic and feel
hopeless. However, humans do have innate strategies for confronting such situations rather than
just falling apart. First and foremost, we have the ability to rely on preloaded representations
acquired through prior learning (Wilson, 2002). In the case of unknown and new situations, we
have a second option: we can reduce the cognitive workload by making use of the environment
itself in strategic ways. This allows us to leave information out in the world to be accessed as
needed, rather than taking the time to fully encode it, and to use epistemic actions to alter the
environment in order to reduce the cognitive work remaining to be done (Wilson, 2002). On this
basis alone, we can see that off-loading acts as a kind of cognitive algorithm that gives us the
ability to reduce our cognitive workload by shifting some information onto the environment. For
example, doing mathematical calculations on paper is a prime case of off-loading: carrying out a
long calculation solely on mental representations within the mind is a feat that would all but
require a miracle, so we write down the intermediate processes needed to determine each
insignificant piece of the answer. This allows the individual to focus on the next concepts to
apply to the problem in order to finally reach the solution, rather than devoting an entire
cognitive process to just one calculation. Put simply, less mental work gives us more time to
efficiently solve a problem, by shifting cognitive processes onto the environment and drawing on
it on a need-to-know basis. Now, we can look into what it
means for off-line cognition to be body based. As described earlier, during on-line situations we
have the capacity to draw on preloaded representations acquired through past experiences. These
preloaded representations, of course, have been acquired through off-line cognitive processes
throughout our lives. Nonetheless, we use these representations to help us think about a problem.
An example of this can be demonstrated through air hockey. During the start of a game, without
necessarily being conscious of their actions, individuals have the tendency to place their paddle
on the surface and slide it around from left to right or in a circular motion. While the action in
itself might seem odd, it primes your motor programs to prepare for the upcoming game even
though no overt game movement has yet occurred. If this mental activity succeeds, it provides a
plethora of cognitive strategies for you to prepare for the game, regardless of whether you are
defending your goal or attacking the opposing goal. This suggests that many centralized,
allegedly abstract cognitive activities may in fact make use of sensorimotor functions in exactly
this kind of covert way (Wilson, 2002). In general, the function of these sensorimotor resources
is to run a simulation of some aspect of the physical world as a means of representing
information or drawing inferences (Wilson, 2002). Going back to air hockey, those swift
movements at the beginning of the game can be regarded as a simulation of the possible
trajectories that can occur once the puck is finally in motion. The puck will always travel in a
straight line within this game, so the best way to protect your goal is to anticipate the motion of
the puck and synchronize your early movements so as to produce an intersection between the
path of the puck and the movement of your paddle. We can also claim that these swift
movements provide a means of off-loading cognitive work onto the environment: by making
these motions automatic, the individual's reaction time and the cognitive processing needed to
respond to the puck are greatly reduced once it is finally set in motion. In association with that
claim, the entire process is body based, since the interactions between the body, the mind, and
the environment become extremely relevant. Cognitive processes are then implemented through
these interactions, so the resulting system and methods are most relevant to Marr's
implementation level. When it comes to Dynamic/Complex Systems Theory, by contrast, it sits
closer to Marr's level of algorithms. As we know, Dynamic/Complex Systems Theory relies on
the idea that the brain is a
complex system in which mental states emerge from the interaction between multiple physical
and functional levels (Bassett & Gazzaniga, 2011). Essentially, it is the modeling framework that
defines a complex system in terms of its subcomponents and their interactions, which in turn
form a network (Bassett & Gazzaniga, 2011). This system can also be characterized as more than
the sum of its parts based on its overall behavior (Bassett & Gazzaniga, 2011). This is evident in
the theory's core concepts. Looking first at modularity, we see that the entire brain system can be
decomposed into subsystems. That is, the
brain contains a variety of modules that are interconnected within the system. Furthermore, we
can also argue that each of these modules is composed of elements that are more highly
connected to other elements within the same module than to elements in other modules (Bassett
& Gazzaniga, 2011). This organization facilitates specialized function within
modules and enhances robustness (Bassett & Gazzaniga, 2011). Modularity, at its core, provides
a foothold for further analysis as we decompose the system into parts. As we
break the system apart further and further, we establish a hierarchy in which certain modules
take priority over others. Combining this hierarchy with modularity allows for the formation of
complex architectures composed of subsystems within subsystems within subsystems that
facilitate a high degree of functional specificity (Bassett & Gazzaniga, 2011). Looking back at
the bigger picture, we now have two enhancements that modularity brings to the table:
robustness and specificity. However, modularity can also facilitate behavioral adaptation,
because each module can both function and change its function without adversely perturbing the
remainder of the system (Bassett & Gazzaniga, 2011). Thus, by reducing constraints on change,
we form the structural basis on which subsystems can evolve and adapt in a highly variable
environment (Bassett & Gazzaniga, 2011). To put it into perspective, we can regard modularity
as a type of cognitive algorithm used to our benefit. Since algorithms are generally composed of
adaptable parts in the service of a solution, the further we decompose the human brain into
subsystems, the more modules arise, each with its own set of functions. As discussed previously,
each module can both function and change its function without changing what the entire system
does in general. Thus, the cognitive algorithm that arises allows us to react skillfully and
efficiently to a variable environment. To see how this framework is applied, we
need to analyze a few experiments that have been carried out. The first experiment delves into how the
development of infants can be regarded as a dynamic system. The way to explain this concept is
through the idea of Emergence. That is, the coming into existence of new forms through ongoing
processes intrinsic to the system. Thus, the question being tackled is: how does the human mind,
with all its power and imagination, emerge from the human infant, a creature so unformed and
helpless (Smith & Thelen, 2003)? To be more specific, we can go further and ask when infants
acquire the concept of object permanence (Smith & Thelen, 2003). The cognitive function being
investigated here is how the brain develops in a way that yields capacities that were not
substantially there before. The hypothesis is that this phenomenon can be directly linked to
multicausality and nested time scales. Under multicausality, developing organisms are complex
systems composed of very many individual elements embedded within, and open to, a complex
environment. The coherence between these elements is generated solely in the relationships
between the organic components and the constraints and opportunities of the environment
(Smith & Thelen, 2003). This self-organization means that no single element has causal priority
(Smith & Thelen, 2003). Under nested time scales, behavioral change occurs over multiple time
scales; behavior depends on the time scale at which it arises and is utilized. In this case,
every neural event is the initial condition for the next slice of time. Every cell division sets the
stage for the next (Smith & Thelen, 2003). This coherence between these levels of the complex
system must mean that the dynamics of one time-scale must be continuous with and nested
within the dynamics of all other time-scales (Smith & Thelen, 2003).
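The nesting of fast dynamics within slower ones can be sketched as two coupled update rules: a fast activation that chases its input, and a slow memory trace that integrates the activation history. This is a generic two-timescale toy with invented rates, not Smith and Thelen's actual dynamic field model:

```python
# Toy two-timescale system: a fast activation variable chases its drive on
# every tick, while a slow memory trace integrates the activation history.
# Each fast step is the initial condition for the next, and the slow
# variable nests the fast one. All rates are invented for illustration.

FAST_RATE = 0.5    # activation relaxes quickly toward its drive
SLOW_RATE = 0.05   # memory drifts slowly toward activation
MEMORY_GAIN = 0.5  # how strongly the slow trace feeds back into the drive

def step(activation, memory, external_input):
    """One tick: the fast variable chases input plus memory feedback;
    the slow variable chases the fast one."""
    drive = external_input + MEMORY_GAIN * memory
    activation += FAST_RATE * (drive - activation)
    memory += SLOW_RATE * (activation - memory)
    return activation, memory

def run(inputs):
    """Iterate the coupled system from rest over a sequence of inputs."""
    activation, memory = 0.0, 0.0
    for x in inputs:
        activation, memory = step(activation, memory, x)
    return activation, memory

# Repeatedly cueing location "A" (input = 1.0) builds up a slow bias that
# persists for a while after the cue is gone, a loose analogue of the
# reaching bias behind the A-not-B error discussed below.
act, mem = run([1.0] * 50)
print(mem > 0.5)   # True: the slow trace has accumulated a bias toward A
```

The fast variable settles within a few ticks while the trace takes dozens, so the history of fast events literally becomes the initial condition for the slower dynamics.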
The method by which the experiment goes about studying both the multicausality and
nested time scales is by adopting a simple A-Not-B error experiment within infants. Thus, the
experiment presents a study in which 10-month-olds and 12-month-olds are given a simple task to
observe the dynamic system of developmental psychology. In essence, the task for the infants is
as follows: the infant is shown a tantalizing toy, which is then hidden under a lid at location A. The
infant then reaches for the toy, and the process is repeated several times. Then comes the twist:
the toy is hidden at a different location B. This results in a "curious" error in which the infant
seeks the toy in the original location A even though they watched the toy being hidden at
location B. This error normally arises in infants between 8 and 10 months old; however, it is
dramatically reduced in 12-month-old infants (Smith & Thelen, 2003). The question is then
asked: what causes this paradigm shift in understanding within these two months (Smith &
Thelen, 2003)? As detailed in the article, it has been suggested that during that two-month period
infants shift their representations of space, change the functioning of their prefrontal cortices, or
learn to inhibit responses; on the dynamic systems view, however, no single one of these causes
has priority.
The bodily and neural systems that interact with each other and with the environment are
naturally the brain, the arms, and the five senses. The model used in the article is neurally
represented, since much of the decision-making and activation is done neurally. The memory
capacity of the infant is also considered throughout the study, as it plays a role in how effectively
the environment influences the infant's neural activity. However, it is worthwhile to note that the
error that occurs in 8-to-10-month-old infants is attributed to the hesitation within the time frame
between when the object is hidden and when the infant begins to reach for the toy
(Smith & Thelen, 2003). While the visual input is acquired, the influence of the environment
itself has a much greater impact on the activation of stimuli within the child's mind. The
moment-to-moment dynamics of behavioral and neural activity embody both the multicausality
and the nested time scales that can be observed within the study. Both dynamics were measured,
and the way to measure them is to understand the coherent behavioral patterns that arise within
the individual. Since multicausality is generated solely in the relationship between the organic
components and the constraints and opportunities of the environment, by observing how these
patterns change in reaction to the limitations the environment imposes on the self-organizing
structure of those components, we can meaningfully measure the essence of the dynamic system
at work.
The second experiment delves into the influence of evaluation on the speed of motor
movements. That is, we wish to evaluate the relationship between thought on one hand and
perception and action systems on the other. On this basis, it is suggested that performing actions
is directly linked to evaluation (Markman & Brendl, 2005). For example, various case studies
have found that movements of the arm are related to an individual's evaluations. Thus, the
question being asked is whether we can use body movements to accurately indicate the
relationship between evaluation and the representation of the self. To explain further, we need to
understand the nature of the experiment. In essence, the experiment presents a representation of
one's self on the screen by placing the individual's name in a digitally generated corridor of
indeterminate depth. Words then pop up in front of or behind the individual's name at a certain
distance away from the name. If we assume
that evaluations are connected to movement representations directly, then people would be faster
to pull positive words towards their bodies and to push negative words away from their bodies
regardless of the position of their names (Markman & Brendl, 2005). However, if the opposite is
true, in which case body movements are made relative to a person's representation of self,
then positive words would be moved more quickly toward the name than away from the name
and negative words would be moved more quickly away from the name than toward the name
(Markman & Brendl, 2005). Essentially, it was predicted that participants of the experiment
would be faster to move positive words toward their name and faster to move negative words
away from their name (Markman & Brendl, 2005). Simply put, the prediction was that body
movements are made based on one's representation of self rather than of the body.
The method by which this was determined was specific and precise. The participants
themselves were 108 German-speaking students from the University of Konstanz. Three
independent variables coexisted in this study: valence (positive vs. negative words), movement
direction (push vs. pull), and instruction set (positive toward/negative away vs. positive
away/negative toward) (Markman & Brendl, 2005). Valence and movement direction were
manipulated within participants, while instruction set was manipulated between participants.
The only dependent variable was the response time to initiate the movement of the lever. The
stimuli were 23 positive and 23 negative German words.
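The design can be sketched as a small analysis table; the response times below are fabricated placeholders used only to show how the cells would be aggregated, not the published data:

```python
# Sketch of the design: aggregate response times (ms) by whether a word was
# moved TOWARD or AWAY from the participant's name, collapsing over the
# push vs. pull lever direction. All numbers are invented placeholders.

# Each trial: (valence, lever_direction, word_moved_relative_to_name, rt_ms)
TRIALS = [
    ("positive", "push", "toward", 510),
    ("positive", "pull", "toward", 515),
    ("positive", "push", "away",   590),
    ("positive", "pull", "away",   585),
    ("negative", "push", "away",   505),
    ("negative", "pull", "away",   512),
    ("negative", "push", "toward", 600),
    ("negative", "pull", "toward", 595),
]

def mean_rt(trials, valence, relation):
    """Mean RT for one (valence, toward/away-from-name) cell."""
    rts = [rt for v, _, rel, rt in trials if v == valence and rel == relation]
    return sum(rts) / len(rts)

# The prediction, restated on the toy data: positive-toward and
# negative-away should be the fast cells, regardless of push vs. pull.
print(mean_rt(TRIALS, "positive", "toward") < mean_rt(TRIALS, "positive", "away"))
print(mean_rt(TRIALS, "negative", "away") < mean_rt(TRIALS, "negative", "toward"))
```

Note that each (valence, relation) cell deliberately mixes one push and one pull trial, so any toward/away difference in the aggregated means cannot be an artifact of lever direction.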
The subjects were seated at a computer and asked to grasp the arm of a lever with their
dominant hand (Markman & Brendl, 2005). Then, as discussed before, the individual's name
was displayed within the digitally generated corridor, with words popping up in front of or
behind the name at a certain distance from it. Each subject was randomly assigned to either the
positive toward/negative away condition or the positive away/negative toward condition
(Markman & Brendl, 2005). Response times were measured in milliseconds and were determined by the
moment of onset of the stimulus to the point when the lever was moved 0.208mm (Markman &
Brendl, 2005). The results achieved after the experiment showed that participants were faster to
move positive words toward their name than away from their name regardless of whether this
response required a pushing movement or pulling movement (Markman & Brendl, 2005). This
suggests that the previous prediction was true. As humans, our movements tend to be organized
around what we consider our selves, whatever form that representation takes. Normally, when we
consider our selves, we tend to believe that the relevant inner representation lies within our
physical bodies, or at their center, but as this experiment shows, this is not always the case. Thus,
abstract representations of our self have a key influence over our body movements rather than
the body itself. That is, our body actions can revolve around a representation of self that lies
outside the body. To see why, look back at the movement of the arm and the lever. As indicated
earlier, the experiment separates bodily representations from what represents our selves. Thus, if
the speed of participants' movements were driven by their representation of their bodies rather
than their representation of themselves, they would have been faster to PULL the lever than to
PUSH it for positive words, and faster to PUSH the lever than to PULL it for negative words
(Markman & Brendl, 2005). However, the results show the opposite: movement direction with
respect to the body made no difference, as participants showed a faster response time to move
positive words toward their name as well as a faster response time to move negative words away
from their name. Any indication that a push or pull as such made a difference was practically
nonexistent. The main bodily systems
that had to interact with the stimuli were the arm movements alone. The arm grasped a lever
with the only options being to pull or push, given whatever instruction set was in place. The
stimuli it had to react to were the words that popped up on the computer screen, which were
either positive or negative. Since the results dictate that individuals respond faster when moving
positive words toward their representations of themselves (i.e., their names), regardless of
whether movement toward the name meant pulling or pushing the lever, the response times
indicate that movement is organized around the representation of self.
The final experiment demonstrates the influence of design attribute-value sets on brand
categorization through the use of motorcycles. The question here is to evaluate the effect of
perceptual design changes by using strongly reduced black-and-white drawings of motor bikes in
which only one or two design attribute-values were modified (Kreuzbauer & Malter, 2005).
However, the underlying hypotheses for this experiment are as follows (Kreuzbauer & Malter,
2005):
H1: The more a product contains design attribute-values referring to a target category, the more
likely it is to be classified as a member of that category.
H2: Adding an irrelevant attribute-value will not change classification of the product toward
the target category.
To put these hypotheses in more familiar terms, consider gaming keyboards and their target
consumers, gamers. Keyboards with macros (shortcuts that allow a button to house
more than one function when pressed), mechanical keys, and custom software are all recognized
as carrying gaming attribute-values. If we instead add an irrelevant attribute-value, one that
carries no information about how the product should be used or its possible uses, it
will not change the status quo of the product. It will still be regarded as a gaming keyboard.
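These hypotheses can be sketched as a toy attribute-value categorizer in which category-relevant attributes raise a membership score while irrelevant ones leave it unchanged; the attribute sets below echo the gaming-keyboard example and are purely illustrative:

```python
# Toy attribute-value categorizer for H1/H2: relevant attribute-values shift
# a product toward the target category; irrelevant ones do not. The feature
# sets are invented, echoing the gaming-keyboard example in the text.

GAMING_ATTRIBUTES = {"macros", "mechanical_keys", "custom_software"}

def gaming_score(attributes):
    """Count how many target-category attribute-values the product carries.
    Attributes outside the relevant set (color, cable length, ...) are ignored."""
    return len(attributes & GAMING_ATTRIBUTES)

plain = {"membrane_keys"}
partial = {"macros", "membrane_keys"}
full = {"macros", "mechanical_keys", "custom_software"}
full_plus_irrelevant = full | {"red_color"}   # irrelevant attribute-value

# H1: more relevant attribute-values -> stronger categorization.
print(gaming_score(plain) < gaming_score(partial) < gaming_score(full))  # True
# H2: an irrelevant attribute-value does not change the classification.
print(gaming_score(full_plus_irrelevant) == gaming_score(full))          # True
```

The score plays the role of the 1-to-7 typicality judgment in the motorbike study: it moves only when a category-diagnostic attribute is added or removed.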
To demonstrate this, the experiment recruited 43 participants who were
part-time students in computer science and business management courses at a large university
in central Europe (Kreuzbauer & Malter, 2005). These students were also highly knowledgeable
about motorbikes at the time the experiment began (Kreuzbauer & Malter, 2005). The
experiment then provided each participant with a printed sheet showing four simplified black-and-white
images of motorbikes (Kreuzbauer & Malter, 2005). Each model differed in the
second and third attributes of the motorbike design frame, these attributes being tire tread and
wheel-fender distance (Kreuzbauer & Malter, 2005). A control variable was implemented in
one of the designs, but as stated in the hypotheses, irrelevant attribute-values should not affect
the categorization of the motorbike at all. The subjects were then instructed to
evaluate whether each of the four motorbikes should be classified as an off-road or street bike.
The rating scale used had a minimum of 1, indicating an off-road motorbike, and a
maximum of 7, indicating a street motorbike. It should also be noted that points were to be
assigned based on the very first impression each design made, rather
than on consciously assessing each bike's attributes (Kreuzbauer & Malter, 2005). Each
model was labeled A through D, with C containing all the attributes that make it an off-road bike and D
containing two attributes associated with a street bike. Model A had only one attribute associated
with a street bike, and B had the same attributes as C except that the color
scheme was manipulated. Results showed that both Models B and C were regarded strongly as off-
road motorbikes. Model A was perceived to be more of a street bike with Model D being
perceived as the most typical street bike among the four models (Kreuzbauer & Malter, 2005).
This shows that attribute-values do dictate whether a product is perceived as a member of the
category it refers to. It also shows that irrelevant attributes have no influence on the
categorization of the product.
This experiment mainly focused on the neural dynamics of perceptual input. As the
experiment demonstrated, we wanted to see whether attributes placed on a product
affect our perceptual judgments of what the product is and where it can be categorized. This
was measured by how strongly the models represented a certain category, namely off-road or
street bike. Off-road bikes usually have a more rugged wheel tread, which is used to grip the
ground when dirt, mud, or sand is involved. The fenders on these bikes also tend to sit farther
from the wheel than on street bikes. Street bikes normally have a small distance
between the wheel and the fender and generally have a smooth wheel tread. As the results
showed, the perceptual input we acquire from the attributes alone was sufficient to indicate high
content validity of the experimental design: consumer perception of key attribute-value sets from
abstract sketches of the motorbike models matched the judgments of product designers and other
experts. The neural dynamics of our perception also had to interact with the stimuli, which were
the abstract models. If the models were fully detailed, we would recognize where they belong
instantly, but by isolating the interaction of key attributes with our perceptual input, we were
able to observe that attributes are the key components dictating what a product refers to,
regardless of color scheme or any other irrelevant attribute.
References
Bassett, D. S., & Gazzaniga, M. S. (2011). Understanding complexity in the human brain.
Trends in Cognitive Sciences, 15(5), 200-209.
Griffiths, T. L., Lieder, F., & Goodman, N. D. (2015). Rational use of cognitive
resources: Levels of analysis between the computational and the algorithmic. Topics in
Cognitive Science, 7(2), 217-229.
Kreuzbauer, R., & Malter, A. J. (2005). Embodied cognition and new product design:
Changing product form to influence brand categorization. Journal of Product Innovation
Management, 22(2), 165-176.
Markman, A. B., & Brendl, C. M. (2005). Constraining theories of embodied cognition.
Psychological Science, 16(1), 6-10.
Smith, L. B., & Thelen, E. (2003). Development as a dynamic system. Trends in
Cognitive Sciences, 7(8), 343-348.
Wilson, M. (2002). Six views of embodied cognition. Psychonomic Bulletin & Review,
9(4), 625-636.