
Working with Gordon Pask (1967-1978): Developing and Applying

Conversation Theory1
Bernard Scott

Scott, B. (2008). “Working with Gordon (1967-1978): developing and applying conversation
theory”, slightly abridged version in Handbook of Conversation Design for Instructional Practice,
R. Luppicini (ed.), Idea Group, New York.

Abstract

Conversation Theory was conceived and developed by Gordon Pask in the days when he was
Research Director of System Research Ltd, a non-profit research organisation founded in the 1950s. I
worked with Gordon at System Research Ltd between 1968 and 1978. Here, I give a personalised
account of the development of Conversation Theory and its applications as I observed and
participated in them during those years. Conversation Theory is reflexive: it explains the observer to
himself. Being party to its development was a journey of self-discovery and self-invention in which
Gordon was my guide and mentor. I have ordered my account chronologically; I have also tried to
show how Conversation Theory evolved in a “boot-strapping” manner as a tool and as an
explanation of its own significance. Gordon, par excellence, knew how to foster creative conversation
and “moments of excellence”.

Key Words: Cybernetics, Conversation Theory, Concept, Memory, Psychological Individual,


Mechanical Individual, Conversational Domain, Entailment Mesh, Entailment Structure, Task
Structure, Styles and Strategies of Learning, Holist, Serialist, Operation Learning, Comprehension
Learning, Learning to Learn.

1 Introduction

In a conversation with Gordon in 1992, I proffered the suggestion, somewhat whimsically, that
cybernetics, in its many guises, could well be described as the art and science of fostering good will.
Gordon accepted the suggestion in all seriousness, reflecting back to me the understanding that I had
unwittingly said something of moment. I have been in conversation with Gordon since 1967, when, as
an undergraduate in Psychology, I was not only introduced to the idea of cybernetics but was also
given the chance of working with one of the few people in the UK who was bold enough to describe
himself as a cybernetician first and as a psychologist, computer scientist, biologist or mathematician
second. I refer to Gordon who, at that stage, had already established an international reputation as an
innovator and visionary and had established and maintained an independent multidisciplinary research
group (System Research Ltd, Richmond, Surrey – a non-profit organisation) for more than a decade.

I spent six months in Gordon’s laboratory, working as the lowliest research assistant, thoroughly
ashamed of my ignorance but eager to learn. I quickly began to appreciate that not only was Gordon a
brilliant psychologist, carrying out original studies of human learning sustained by remarkably precise
and perspicuous theorising, but he was also a polymath, at home in many fields, and, more than that,
he had an unashamed commitment to cybernetics as a unifying discipline, regarding its conception as
the greatest intellectual achievement of the 20th Century.

1 This chapter is a modified version of a paper originally published as part of a festschrift in Gordon’s honour
(Scott, 1993). I thank the editor of that festschrift, Ranulph Glanville, for permission to adapt and republish
much of the text of the original paper.

I fell in love with cybernetics. Taking one of Gordon’s then most recent papers as a starting point
(Pask, 1966) I read avidly and widely. I became a serious and dedicated student of cybernetics. As I
was doing this, I was also carrying out my duties in Gordon’s laboratory: recruiting subjects for
experiments, acting as experimenter and analysing data. The significance of what I was being asked to
do came to me bit by bit.

We were studying many things: skill acquisition, man-machine interaction, styles and strategies of
learning, small group interaction. Gordon had developed a model of learning in terms of the
“symbolic evolution of concepts”, which embodied an understanding of how organisationally closed
systems that interact one with another inevitably create symbolic domains of interaction that engender
self and other consciousness. He had a clear understanding of how biological systems adapt and
evolve to become a medium for mental life.

The model of learning gave rise, naturally, to a theory of teaching: having set some forms of goals or
criteria, teaching becomes a control process, in Gordon’s words, “teaching is the control of
learning”(Pask, 1968).

One development of Gordon’s thought was the design and specification of mechanised systems that,
as adaptive controllers, supported the effective acquisition of skilled behaviours. With his co-worker,
Brian Lewis, Gordon designed many such systems, carried out extensive empirical studies and
provided models and theoretical frameworks to guide good practice that were (and still are) definitive.
When I arrived on the scene, projects were in progress on tracking skills, keyboard skills, generalised
signal transformation skills and group learning and decision making. Seminal and pioneering work
was being carried out using computer programmes to model learning as an evolutionary process.

I graduated in psychology in 1968 and, at Gordon’s invitation, returned to work at System Research
Ltd, where I remained – apart from a year’s secondment to the Open University to work with Brian
Lewis – for ten years. In 1978, I left the research field and went off to become a practitioner, an
educational psychologist. In that context, Conversation Theory served as an exceptionally useful
guiding framework (Scott, 1987).

There is not space to catalogue all that was done in those days at System Research Ltd. In two papers
written some years ago (Scott, 1980, 1982), I gave an overview with appropriate references. What I
will do here is describe some of the key studies in which I was involved to show how I saw
Conversation Theory come into being and take the form that it did. I will also offer some thoughts and
comments on the significance of the work that we did, as I see it now, after more than twenty five
years.

2 Playing with adaptive systems

Many of Pask and Lewis’s studies of adaptive teaching used a generalised, signal transformation task
in which, as well as acquiring a high level of performance in a perceptual-motor skill (pressing
buttons, with a specified time interval, in response to a signal presented as a visually displayed pattern
of lights) subjects had to learn the code or rule that related lights to buttons. In complex forms of the
task, there was more than one such code or rule. An additional signal acted as the cue to show which
rule was operative at any instant (Pask and Lewis, 1968).

Among several feedback loops that sampled performance in order to modify the type and presentation
rates of stimuli, the adaptive controller took note of how well the subject was performing with respect
to a particular code or rule. This information was used to determine the relative frequencies at which
particular codes or rules were in operation, according to the general principle that subjects should be
given more practice with the codes they were having difficulty with rather than with those for which
they demonstrated a degree of mastery.
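In sketch form, the scheduling principle might look like this. The inverse-error weighting, the `floor` parameter and all the names below are my own illustration, not the actual System Research controller:

```python
import random

def choose_code(error_rates, floor=0.05):
    """Pick which code/rule the next stimulus will use.

    Codes the subject gets wrong more often are sampled more
    frequently; `floor` keeps mastered codes in occasional rotation.
    """
    weights = [max(e, floor) for e in error_rates]
    total = sum(weights)
    probs = [w / total for w in weights]
    # Sample one code index in proportion to its weight.
    r = random.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# A subject struggling with code 2 sees it most often.
random.seed(0)
picks = [choose_code([0.1, 0.2, 0.7]) for _ in range(1000)]
# code 2 dominates the sample
```

The same loop structure accommodates the debate described below: a subject's declared preference simply replaces the sampled index, subject to review.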

Pask and Lewis noted that several subjects made deliberate errors in order to over-ride the adaptive
controller. They “played” with the system and found ways of imposing their own learning strategy in
order to practise those components of the skill that they wanted to practise rather than those
components selected by the adaptive controller by its built-in teaching strategy. Gordon’s insight was
to recognise that this play and competition between man and machine had the logical form of a
conversation: as well as an exchange of questions and answers, there was a debate about what sort of
question was relevant at any given moment.

The next step was to design systems that legitimised this debate, in which subjects could explicitly
state their plans and preferences, consult the machine about its recommendations and pursue the
strategy of their choice. There was a caveat: a subject’s choice was only allowed to stand if, over a
trial period, he or she could demonstrate that the choice was a sensible and effective one.

I was the experimenter for a lengthy series of studies using “conversational” adaptive systems (Pask
and Scott, 1971; Scott, 1970). The major finding was that these systems were even more effective as
teaching systems than either standard adaptive systems or control conditions in which experimental
subjects were completely free in their choice of learning strategy. Amongst a number of
differences between subjects’ behaviour, we noted attempts to adopt one of two distinct strategies: a
“stringing” strategy, where subjects pressed buttons one at a time, rather like playing an arpeggio on a
piano, and a “clumping” strategy, where subjects pressed buttons simultaneously, rather like playing a
chord. Subjects did not always succeed with the strategy of their choice and often had to be persuaded
to try an alternative.

In order to communicate with subjects about strategy types, it was necessary to share with them a
description of the structure of the task and to provide an interface which allowed them to indicate their
choices in terms of that shared description. As we shall see below, this was the arrangement employed
in CASTE.

To sum up, what we had was an efficacious conversational teaching system, in which, as well as the
setting and solving of problems, there was also discourse about what class of problem should be
posed, contractual agreements about how to resolve disagreements, a “modelling facility” in which
problems were posed and solutions demonstrated and an interface, embodying a description of the
task, via which communication took place and from which the experimenter/observer could make a
record of transactions. Analysis of those records provided clear evidence for the existence of distinct
learning strategies and a tendency for subjects to persevere with the strategy of their choice, even
when it was proving to be ineffective.

Given these successes and interesting findings with respect to a perceptual-motor skill (albeit, a
complex one), our next step was to see to what extent subjects might exhibit similarly distinct
strategies when learning a body of intellectual subject matter.

Zoologists on Mars

Our first requirement was a suitable task, one rich enough to provide for a range of possible learning
strategies. At this stage, we were not sure of what we were looking for or what we might find. The
literature suggested we might find differences between visualisers and verbalisers, good and poor
pattern recognisers and, following a suggestion of George Kelly’s, subjects who accumulated
knowledge fragmentarily and those who constructed and tried out more global hypotheses. We wanted
a task richer than that used by Bruner, Goodnow and Austin (1956) in their classic studies of “concept
attainment”: one in which, unlike the Bruner et al. task, there was not just one logically optimal strategy.

For some time, we played around with maps of cities, maps of underground railway systems and
associated demographic information but felt the material was too strongly biased towards the visual
modality. Eventually, Gordon came up with the suggestion that we look at taxonomy learning and I
spent a little while learning how to distinguish between different kinds of mosquito. The attractive
feature of taxonomies is that they have an overall structure which can be pictured or imaged and they
also have a body of discrete rules that can be learned step by step or organised into categories. We
realised we could design a learning task in which, by not displaying the overall structure at the outset,
we could observe the different ways in which subjects might set about inferring and constructing an
image of that structure as well as observing how they chose to learn the body of rules.

Real taxonomies in biology are messy and complex. Rather than try to adapt the mosquito or some
other creature to our purposes, I set about inventing one for a Martian animal, the “Clobbit”, and
conceived a task in which subjects were asked to play the role of zoologists being sent to Mars as part
of a scientific expedition and were required to make themselves familiar with what was currently
known about Martian fauna. Subjects were provided with a loosely organised library of information
about the several subspecies in the Clobbit family. The “library” took the form of a set of index cards
grouped together under given main headings: behaviour of subspecies, appearance of subspecies,
rules for distinguishing subspecies, descriptions of the form of the taxonomic tree and explanations of
the meaning of the code names assigned to subspecies. Subjects were free to browse through and
access this information in any order they chose, with the requirement that they would eventually learn
the rules that distinguished one subspecies from another.

The full experimental design that Gordon and I eventually conceived called for a control task, in
which, rather than learn from a library of information, subjects were taken through programmed
learning packages, deliberately designed to support a particular learning strategy. For this task, I
invented a second Martian creature, the “Gandlemuller”.

Detailed descriptions of the task materials, experimental design, procedures and results have been
published elsewhere (Pask and Scott, 1972). Here, I can only give a brief summary and comment on
the significance of the work as I see it thirty years on. Before doing that, I would like to say a few
words about how Gordon and I worked together.

Gordon was a master of the creative conversation and was particularly helpful when I got stuck or
bogged down. Without revealing his own preoccupations and understandings, Gordon would involve
me in a dialogue in which I was invited to proffer suggestions and solutions. First, I would be asked to
teach back to him what I saw as the logical and practical demands of the project at hand. Often, this
was enough for me to see the way forward. On other occasions, Gordon rescued me from my
brooding and diffidence by helping me see that the descriptions I was giving of why something would
not work contained the seeds of a possible solution. There were other (and many) occasions when
Gordon would, after a night’s reflection, return, not only with suggested solutions for the particular
problem at hand, but also with a larger and more inspired vision of what we were about. Even when I
did come up with something workable, Gordon had a happy knack of seeing the generality behind
particular solutions.

For my part, I learned to unashamedly confess my lack of understanding. Gordon was always
prepared to slow down and explain the ideas behind his formidable technical language and I often
went off armed with a set of references to engage in more serious study. Although I had by then
completed my undergraduate studies, I was still learning how to learn and, with Gordon as an
example, I acquired the confidence to explore other disciplines, to look for similarities and differences
and to use cybernetics as a unifying framework. As Conversation Theory unfolded, it increasingly
became a theory that made itself. As Gordon once described it, it is, amongst other things, a “theory of
theory building”. I am making these remarks as a way of giving substance to the descriptions of
learning processes that evolved in our studies of how subjects learn complex bodies of academic
subject matter. Our models and explanations of student learning themselves served (and still serve) as
a rich source of ideas about how to learn effectively. They are a set of cognitive methodologies that
reveal their power and usefulness once students are encouraged to become aware of and reflect on the
processes of learning. Unfortunately, it is still the case that most undergraduate and adult learners are
poor at this sort of reflection: the internal conversation that is self-teaching (cf. Harri-Augstein and
Thomas, 1991). Our work was often misunderstood in that we were seen as giving an account of
relatively fixed individual differences (we were, for a while, encouraged by our sponsors to develop
normative tests to reveal such differences). Whilst individuals may have particular strengths and
weaknesses with respect to cognitive functioning or, as Gordon would say, they may have processor
limitations, the cognitive organisations and processes that constitute a learning strategy can be
explained, demonstrated and taught. An individual may have a natural or preferred style of learning
but he or she can learn to become more versatile and more effective. Learning to learn is an open-
ended activity.

Our would-be Martian zoologists (graduate or undergraduate students from a range of specialist
disciplines) revealed a complex array of individual differences. All but a few succeeded at the task set.
The few in question appeared to have little idea about how to function as autonomous learners and
floundered hopelessly in the sea of information available. The others, fairly rapidly, adopted an
approach which they sustained consistently until the task was completed. Preferred strategies did vary
and, employing a number of parameters, it was possible to classify those strategies into measurably
distinct types. The most obvious and clear-cut distinction was between subjects who employed some
form of global or holist strategy and those who employed a step-by-step or serialist strategy. Holists
typically sought to identify the overall structure of the taxonomy before committing themselves to
learn specific rules. When they did learn rules, they did so globally, holding several in mind,
comparing and contrasting and formulating and testing complex hypotheses. Serialists typically
identified and learned specific rules as they went along. Their understanding of the structure of the
taxonomy was built up step by step. Some appeared to understand that they were dealing with an
inverted tree structure and, having mastered a rule at a particular branch point, would identify a
related rule and master that. Some were particularly systematic, exhaustively working down one
branch of the tree before working down another. Others were less systematic and moved more
randomly from subspecies to subspecies as if the data accessed were relatively isolated pieces of a
jigsaw but they, too, insisted on mastering a particular rule before proceeding.

The control task was used to test hypotheses about the stability of a subject’s preferred style of
learning. Two teaching programmes were designed: one suited to a holist style of learning, the other
designed to suit a serialist style. From our original set of subjects, sixteen were chosen: eight readily
classed as holists on the Clobbits task and eight readily classified as serialist.

The sample sizes may seem small but it should be borne in mind that the hypotheses we were testing
called for clear-cut differences at statistically very high levels of significance. As an illustration, on
some of our measures we were looking for no overlap between the two groups. The probability of
such a result happening by chance is one in a thousand. The psychological literature is replete with
studies where the probability level for rejecting the “null hypothesis” that there is no significant
difference between two sets of data is set at one in twenty. Moreover, the tasks set were
exceedingly lengthy: a subject’s behaviour was observed over many sessions.
For some individuals, this was more than twenty hours. I know of no other studies then or since then
that have carried out such detailed observations of human learning. We also carried out many follow-
up studies in order to replicate our original findings, to test additional hypotheses and to see if our
results generalised to other intellectual domains than that of taxonomy learning. Some of our later
studies employed CASTE (see below) where interactions between a subject and a body of knowledge
were monitored and recorded mechanically. A portable version of CASTE, referred to as INTUITION
in the literature, was taken into schools and colleges. Other laboratory based studies used libraries of
information presented on sets of index cards or structured into teaching programmes, as in the original
Clobbits/Gandlemullers study. Several hundred subjects were recruited to take part in these studies.
Again, I know of nothing in the literature comparable to this body of data. I find it particularly galling
and meaningless when superficial attempts are made to relate our in-depth, lengthy studies to data
gathered from relatively simple text reading tasks or data obtained from self-report inventories of
students’ study habits. For example, the distinction between surface and deep learning is based on a
fairly simple model of human learning. To identify some students as “surface” or “deep” learners,
reveals the extent to which so many students are ill-equipped, cognitively and motivationally, to act as
autonomous learners. This may be useful in a social survey but psychologically, the distinction is only
as profound as the model of learning that underlies it and the accompanying empirical evidence that
supports it.
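The “no overlap” criterion can be checked combinatorially. A minimal sketch, assuming two groups of eight and exchangeable ranks under the null hypothesis (the exchangeability framing is my gloss; the group sizes are those of the study):

```python
from math import comb

# Under the null hypothesis, every assignment of the 16 observed
# ranks to the two groups of 8 is equally likely.
arrangements = comb(16, 8)  # 12870 equally likely splits

# Probability that the predicted group (say, the holists) occupies
# all eight top ranks on a measure purely by chance.
p_predicted_separation = 1 / arrangements
```

For two groups of eight the chance of complete, correctly directed separation is in fact even smaller than one in a thousand, which is why a handful of such results is so compelling.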

The purpose of our control task was to examine the consequences of matching or mismatching
students, identified as having a particular preferred learning style, with teaching programmes that
embodied either a holist or serialist learning strategy. To cut a long story short, we demonstrated
unequivocally that it is possible to design programmes that work very well for some subjects and
work very poorly for others. We also, more or less incidentally, established the validity of the
principle that having subjects teach back what they have learned in the form of a coherent “story”
leads to long term retention. Other plausible forms of assessment, such as multiple choice questions,
do not do so.

3 CASTE

The acronym, CASTE, stands for Course Assembly System and Tutorial Environment. Following our
studies of how to train zoologists en route to Mars, we secured funding for the task of showing how
best to teach probability theory and statistics to social science students, a notoriously difficult task.
The work and thought we put into this led both to the creation of CASTE as a prototypical system to
support conversational learning and to the first full statement of Conversation Theory as a
reflexive metatheory of theory building and as a unifying paradigm for psychology. In particular,
Conversation Theory can serve as a useful unifying framework for a wide range of different learning
theories and theories of creativity (behaviourist, cognitivist, constructivist, psychodynamic). Gordon
liked to theorise; he also liked to embody his theorising in artefacts. CASTE was an embodiment of
Conversation Theory, interpreted as a theory of learning and teaching.

The motivation for the initial studies came from our work showing students have preferred learning
styles and that there can be mismatches between a particular teaching strategy and a student’s learning
style. The main hypothesis we entertained was that many social science students prefer to learn
holistically. They find probability theory and statistics hard to learn because text books and lectures
generally embody a serialist teaching strategy. We wanted to find ways of teaching the subject matter
in question which took account of individual differences and at the same time were guaranteed to lead
to effective learning.

To accommodate the needs of holists we thought it of primary importance to provide a map of the
subject matter and, guided by that map, to permit such students to explore and work on different parts
of the subject matter concurrently. To ensure that effective learning took place, we also instituted a
strict “teachback” routine, whereby, at regular intervals, students were obliged to demonstrate their
understanding of the topics they had been studying. They did this by constructing models, carrying
out experiments and solving problems using a modelling facility.

About six months was spent mapping the subject matter. The effort was two-fold. On the one
hand, it was necessary to analyse and master the subject matter, by reading texts, comparing them,
distilling out the core of the subject matter, revealing logical and analogical relations between
concepts and agreeing a standard terminology. On the other hand, we sought tools and principles to
guide us in ensuring the body of subject matter was indeed a logically coherent whole. While I
grappled with probability theory and statistics, Gordon set about inventing a general methodology for
knowledge and task analysis and an associated general theory of conversational domains. Around this
time, Dionysius Kallikourdis joined our research team and proved to be an inspiring friend and
colleague in our evolving conversation about conversations.

As a direct analogue of the conversational adaptive systems described above, CASTE, too, was a
system that supported conversational learning. Subjects were provided with a description of the
subject matter in the form of an entailment structure, which showed how discrete topics were related
one to another, logically and analogically. For each topic, there was an associated task structure,
which defined its content as a set of operations that could be carried out using the modelling facility.

When subjects chose to work on a particular topic, a set of lesson materials, based on these task
structures, was made available to them. The lesson material contained explanatory, expository text
(conceptual knowledge) and also contained guidance in how to use the modelling facility (procedural
knowledge). Subjects’ choices were monitored and recorded via an electro-mechanical interface,
supported by a suite of computer programs that embodied the CASTE tutorial heuristics. These
heuristics were designed to ensure that learning was effective. Essentially, they took account of a
subject’s current level of understanding of the subject matter in order to decide which subset of topics
could be worked on at any given instant, guided by the rule that, before working on a particular topic,
a subject had to have demonstrated his or her understanding of any topics deemed to be pre-requisites,
as depicted on the entailment structure display. Since the entailment structure included analogy
relations between topics, as well as a partially ordered set of logical entailment relations, there were
frequently several different learning routes whereby a subject could come to know a particular topic.
If a subject, when requested, failed to demonstrate understanding of a particular topic, he or she was
guided back through the subject matter until the current level of understanding was established.
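The gating rule is easy to state computationally. The sketch below is my own illustration (the topic names and the dictionary representation are invented; CASTE itself was an electro-mechanical system): it computes which topics are currently open, given the prerequisite relation depicted on the entailment structure and the set of topics already understood:

```python
def available_topics(prerequisites, understood):
    """Topics a subject may work on now: every prerequisite already
    understood, and the topic itself not yet mastered."""
    return {
        topic
        for topic, prereqs in prerequisites.items()
        if topic not in understood and prereqs <= understood
    }

# A toy fragment of a probability-theory entailment structure.
prerequisites = {
    "sample space": set(),
    "event": {"sample space"},
    "probability measure": {"event"},
    "conditional probability": {"probability measure"},
}

available_topics(prerequisites, understood=set())
# only "sample space" is open at the outset
```

Analogy relations would add further edges, so that a topic could become available via more than one route, which is exactly what gave subjects their choice of learning paths.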

As well as choices about what to work on, other transactions were permitted and monitored. Subjects
were free to explore topics at any time. The explore transaction allowed students to browse through
the library of lesson materials without the formal obligations of working through them using the
modelling facility and demonstrating understanding by carrying out experiments, constructing models
and solving problems. The “aim for” transaction was mandatory. Before working on topics, a subject
had to choose a topic to aim for. This could be well in advance of the current level of understanding.
Topics to be worked on were then chosen from the subsets of topics that were pre-requisites of the
aimed-for topic. I have suppressed much detail in this description of CASTE. Similarly, I can only
give a brief summary of the findings from the CASTE studies. More detailed descriptions can be
found in Scott (2001).

Subjects typically spent six to eight hours mastering the subject matter when guided by the CASTE
tutorial heuristics. On follow-up studies they showed excellent long-term retention. For some groups
of subjects, the tutorial heuristics were modified or suppressed altogether in order to give subjects the
freedom to teach themselves, for example, to work on topics as they saw fit. As a rule, subjects
learning in these conditions failed to master the whole subject matter and retention was relatively
poorer, even though some of them spent as many as twelve hours “teaching themselves”.

In the conversational learning condition, where the full CASTE tutorial heuristics were operative,
distinct and coherent learning strategies were observed. Some subjects adopted a serialist strategy.
Typically, they aimed for and worked on just one topic at once. Others adopted a holist strategy, where
several topics were worked on, subordinate to a particular “aimed-for” topic. Holists did far more
“exploring” and, as evidenced by post-session interviews, set themselves long-term plans that
included taking account of analogy relations in order to avoid having to systematically work on all the
topics. Part of the experimental procedure included the regular sampling of subjects’ uncertainties
about strategic choices and the expected content of topics chosen to be worked on, using a device
called BOSS (Belief and Opinion Sampling System), which automatically normalised the subjective
probabilities assigned to a set of alternatives. Typically, subjects working serially had relatively high
uncertainties about what they would work on next but had relatively low uncertainties about what they
expected to find as the tutorial content of the topic they had chosen to work on. In contrast, holists
were generally quite certain about their strategic plans but had relatively high uncertainties about
specific tutorial content.
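The normalisation BOSS performed can be sketched in a few lines. The raw-weight interface is my assumption (the device’s actual input format is not described here), and Shannon entropy is offered only as one standard uncertainty index, not necessarily the measure BOSS used:

```python
from math import log2

def normalise_beliefs(raw_weights):
    """Rescale a subject's raw confidence ratings over a set of
    alternatives so they form a subjective probability distribution."""
    total = sum(raw_weights)
    if total == 0:
        # No preference expressed: fall back to maximum uncertainty.
        return [1 / len(raw_weights)] * len(raw_weights)
    return [w / total for w in raw_weights]

def uncertainty(probs):
    """Shannon entropy (bits): high when many alternatives are live,
    low when one alternative dominates."""
    return -sum(p * log2(p) for p in probs if p > 0)
```

On this index, a serialist’s spread-out beliefs about the next topic yield high entropy, while a holist’s confident plan yields entropy near zero, matching the pattern described above.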

To sum up, serialist subjects advanced on a narrow front, relatively uncertain about what and where
they might advance to in subsequent steps and only advancing into areas where they had quite
accurate expectations about lesson content. Holist subjects advanced on a broader front, relatively
certain about what steps they would take in the future, happy to take several steps in parallel and
tolerant of their immediate uncertainties about what they might find in terms of precise lesson content.

Looking back at the CASTE studies in the light of current concerns, two things strike me quite
forcibly. One is that the detailed analysis and mapping of the subject matter required to support the
studies still stands as exemplary of its kind. I know of nothing in the course design literature that
compares to our work in taking an extensive body of intellectual subject matter and subjecting it to
such a close scrutiny. The other thing that strikes me is that our studies still stand as the most thorough
investigation of how students learn and can be helped to learn effectively, within what is now referred
to as a hypertext domain.

4 Learning to Learn and Versatility

Even this most summary account of our work would not be complete without mention of one of our
enduring concerns: how to characterise and foster effective autonomous learning. CASTE was an
effective tutorial system for subjects with a range of preferred strategies and learning styles. We also
observed the pathologies of learning that occur when subjects are left to their own devices, without
tutorial support. We also garnered evidence that suggested that subjects who had been exposed to the
CASTE tutorial took from that experience some understanding of how to function as effective
autonomous learners. In a series of studies we used diagnostic procedures to reveal learners’ strengths
and weaknesses, we fed back the results of our diagnoses in a supportive and constructive manner, we
ran workshops designed to give learners the opportunity to practice their learning skills and reflect on
the processes that lead to effective learning.

In characteristic fashion, Gordon summarised many of our main ideas as readily assimilable
aphorisms and epithets. This is one of Gordon's strengths that is often overlooked. Whilst being a
polymath and inventor, with a wide command of many disciplines and associated technical languages,
and someone whose own theorising led him to invent notation schemes and a specialised
terminology, he also had a gift for getting to the heart of the matter with a pithy phrase or definition.
For example, entailment structures and task structures are “permission giving structures”: entailment
structures show “what may be known”, task structures show “what may be done”. Learning is a
process with two complementary aspects: making descriptions of what may be known (Gordon
dubbed this aspect of the learning process comprehension learning) and mastering particular skills
and procedures (Gordon dubbed this aspect of the learning process operation learning). Typical
pathologies of learning are improvidence and globe-trotting. Learners with a serialist bias very often
exhibit improvidence: they master skills and procedures but fail to appreciate relations of analogy
within the larger scheme of things and, as a consequence, are poor at generalising. In effect, they learn
the same things over and over again because they do not recognise and appreciate the underlying
similarities of different knowledge domains. Learners with a holist bias very often exhibit globe-
trotting: they build elaborate descriptions of the relations between knowledge domains but fail to
master content by acquiring particular skills and procedures. A versatile learner is someone who not
only makes global descriptions but also engages in relevant operation learning to ensure the
descriptions have real content. Holists like to have maps and overall justificatory schemata. Serialists
like to have a command of particular operations and procedures. The foregoing statements are
something of a caricature. Effective learning requires that both comprehension learning and operation
learning take place. Comprehension learning may itself be conducted holistically or serially, as may
operation learning. In reality, some learners need to be encouraged to take note of the larger picture;
others need to be encouraged to pay closer attention to operational detail. I invite you, the reader, to
reflect on your own strengths and weaknesses in terms of these distinctions.

5 Conversation Theory

As already noted, Conversation Theory emerged along with the development of CASTE. A full
statement of the theory, supported by descriptions of CASTE and associated empirical studies, first
appeared as a series of papers in the newly founded International Journal of Man-Machine Studies
(Pask, 1972; Pask, Kallikourdis and Scott, 1975; Pask and Scott, 1973; Pask, Scott and Kallikourdis,
1973). Much of this material was eventually written up in book form (Pask, 1975). A second volume
(Pask, 1976a) described some of the later studies, which were also reported in the British Journal of
Educational Psychology (Pask, 1976b, 1976c).

In many respects, Conversation Theory was not new. The importance of understanding man-machine
interaction as conversational in form is stressed in many of Gordon’s earlier papers, along with the
notion that man and machine, together, form a self-organising system. However, Conversation Theory,
as first expounded in the early 1970’s had several novel features, which I would like to stress. I cannot
hope to give a complete account of the theory here. The reader should consult the books and papers
already cited for that. What I will do is say something about those features of the theory that made it
(and still make it) paradigmatically radical and revolutionary.

First, a word about motivation is necessary. Gordon was a cybernetician. His theorising was always
global and transdisciplinary. The desire to abstract and generalise is a key feature of cybernetic
thought. That such generalising is possible and fruitful is what attracted Gordon and others of his
generation to cybernetics. The pursuit of transdisciplinary truths, principles and insights was also what
damned cybernetics and cyberneticians in the eyes of the scientific establishment (it still does in many
quarters). As spelled out in his early introduction to the discipline (Pask, 1961), Gordon understood
this issue very well and took, as a matter of principle, that cyberneticians should develop their
generalisations from the secure and respectable position of being accomplished practitioners within a
particular discipline. Gordon’s primary discipline was psychology. Within that, as already described,
he had been responsible for a major body of work on learning and teaching. Theory was embodied in
artefacts and supported by a range of empirical studies. In what follows, I wish to discuss
Conversation Theory in its larger context, as a cybernetic theory of culture, consciousness and social
systems (Pask, 1979; Scott, 1983; Scott, 2001).

Conversation Theory is a cybernetic theory of observers and the communication between them. It is
grounded in cybernetics, in particular, the cybernetics of self-organising systems. As a theory of
observers, it is reflexive: it gives an account of what cybernetics is and what cyberneticians do. To use
von Foerster’s terms, it “explains the observer to himself” (von Foerster, 2002).

A key concept in understanding how Conversation Theory is constructed is the distinction between a
cognitive process and the processor in which the process is executed. Typical processors are one or
more embodied brains or parts of them. In man-machine interaction, processes are distributed between
man and machine. In human communication (conversation), processes are distributed between
persons. In general, processors are embodied brains within a particular environmental niche or setting.
Processors that support the processes that are a “conversation” are dynamic, self-organising systems
(as defined by von Foerster, 1960). This means that learning cannot not occur, that conversation
cannot not take place. Gordon, on many occasions, speculated that, in our cosmos, as well as the
biological systems found in our planet’s biosphere, there are many other systems that are dynamically
self-organising and are candidates as supports for conversational processes, including planets
themselves, stars and galaxies.

In contrast to processors, processes are program-like entities; they are symbolic, they can be described
in a processor-independent manner. This is the key idea that separates Gordon’s theorising from that
of Humberto Maturana. The reader may recall that Maturana, himself a student of von Foerster, has,
in several publications, developed a theory of autopoietic, or self-constructing, systems. His key insight
is to recognise that many systems that are self-organising in the classic sense of von Foerster (1960)
(that is, they continually evolve beyond the observer’s reference frame) are also self-constructing. He
refers to this general phenomenon as organisational closure (Maturana and Varela, 1980). For
biological systems (Maturana’s starting point), this literally means systems of processes that, amongst
other products, necessarily produce their own embodiments, the processor. He goes on to describe
how all the epiphenomena of consciousness and social life may evolve from the interaction of such
systems. As an aside, it is perhaps worth noting how closely his account of the evolution of mind and
consciousness parallels that of George Herbert Mead (Mead, 1934).

Gordon, in contrast to Maturana, distinguishes and characterises a class of processes that are both
symbolic and self-replicating. He distinguishes a psychological, conversational, social autonomy that
is distinct from the biological or mechanical. Do recall that this is a distinction that we, as observers,
are invited to make. However, there is, for the social scientist as distinct from the biologist, real
advantage in making the distinction. This, I believe, is Gordon’s key contribution to the cybernetics of
cultures and societies, although I am sure he would be the first to admit his indebtedness to the work
and thought of the social anthropologists, Gregory Bateson and Anatol Rapoport.

This is not to say that Pask is right and Maturana wrong. Rather it is a difference of emphasis.
Maturana’s is a theory of symbolic system emergence; Gordon’s is a theory of symbolic systems in
interaction. Inevitably, Maturana’s account approaches some notion of psychological individuation,
beyond the biological; inevitably, Gordon’s account has to take account of the biological being that
supports psychological knowing. His recent work has been concerned with just that: an account of the
“interactions of actors” that pays attention to that peculiar moment where observing systems agree
that they are such for each other and agree to converse and, for a time at least, are participants in a
larger, reproducible symbolic system; a conversation. Interestingly, the sociologist Niklas Luhmann
developed a theory of social systems in the 1980s, also based on cybernetic ideas of autopoiesis and
communication (Luhmann, 1995). Luhmann's writings show he was aware of Pask's earlier work on
man-machine interaction as self-organising systems but show no awareness of the development of
Conversation Theory as it happened in the 1970s. Luhmann distinguishes biological, psychic and social
systems, in contrast to Pask's fundamental bipartite distinction between process and processor. In Scott
(2001), I discuss in more detail the similarities and differences between Luhmann's theory and
Conversation Theory.

Having established the distinction between process and processor, Conversation Theory goes on to
distinguish two types of self-replicating individuals: Mechanical (or M-) Individuals and
Psychological (or P-) Individuals. Typical M-Individuals are biological organisms that are, in
Maturana’s phrase, “organisationally closed and informationally open”. From the perspective of an
external observer, they are taciturn systems (Gordon’s phrase) that adapt and evolve within a
particular niche. P-Individuals are a particular class of self-reproducing and self-referential systems
that, although executed or embodied in M-Individuals, are not necessarily in one-to-one
correspondence with them. They are symbolic, language-oriented systems. To observe them and find
out about them, the observer is necessarily a participant observer, he converses with them.

To my mind, one of the most elegant features of Conversation Theory is the set of definitions that are
employed to characterise a P-Individual. Recall that a P-Individual is a self-reproducing totality. In
order to examine and analyse a P-Individual, it is necessary to make two further distinctions: a
distinction between participants and a distinction between levels in a hierarchy of control or
production. The distinction between participants is necessary to retain a sense of person-hood,
personal knowing and consciousness. In Conversation Theory, consciousness is irreducible. It is
“knowing with another”; as noted, this may be interaction of two participants or perspectives in one
brain or the interaction between participants in a conversation. Put succinctly, P-Individuals can
always be analysed into two or more participants; participants, with rare exceptions, are also P-
Individuals. (One exception is CASTE, which, with its tutorial heuristics, although not a P-Individual
in its own right, is a surrogate participant, a support that allows conversation to occur and to be
observed). At the heart of all P-Individuation is the “I-Thou” relationship, which, as Gordon has noted
on many occasions, is the primary analogy (I and Thou are similar but distinct) upon which all
knowing is based. As observers, we construct analogies, share understandings and agree to agree or
disagree about their verisimilitude or usefulness. It is critical to recognise that the interaction between
participants is not mechanical and causal as might be described by a behaviourist observer. It is
provocative (Gordon’s term), arising from an awareness of the other’s awareness. Conversation theory
deploys a protologic or, equisignificantly, a protolanguage, to describe the interactions between
participants, in terms of which shared understandings are characterised as a form of reproductive
process in which, minimally, one participant learns about the other. The transactions permitted in
CASTE, described above, are supports for effective learning precisely because they ensure that
understandings occur and that their occurrence can be observed. Another way of putting this is to say
that a cybernetic theory of conversations is, epistemologically, necessarily a second order theory: the
reality shared by participants is what they agree it to be.

The second distinction, between levels of control, is similarly fundamental. It appears in many
disguises: it is the distinction between form and content, between a description and that which it
describes. Its purpose, in the context of P-Individuation, is to permit us, as observers, to invoke and
introduce some notion of mechanical causation. Processes have an effect: there are products, some of
which are, in M-Individuation, the structures (or “fabric”) that embody the processes, some of which
are, in P-Individuation, descriptions of processes.

In Conversation Theory, base level processes are distinguished and called concepts. Concepts are
processes that recall, recognise, bring about or maintain a relation. Higher level processes are called
memories. Memories are processes that recall, recognise, bring about or maintain a concept.
Recursively, higher levels of process may be invoked. Recall that we have already recognised a P-
Individual as being a self-reproducing totality; as such it is a self-reproducing class (in the sense of a
named collection) of concepts and memories. A succinct way of closing the self-referential loop in
this elegant set of definitions is to recognise that insofar as a concept is memorable, it is self-
reproductive and is, ipso facto, a P-Individual, that is, all concepts are P-Individuals and all P-
Individuals are concepts. At the heart of any P-Individual, there is a description of what he or she or it
thinks he or she is. Von Foerster (2002) in a similar line of argument states that, “An observer is his
own ultimate object.”
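These layered definitions can be caricatured in code. The sketch below is my own illustration, not Pask's formalism, and the function names are invented for the purpose: a "concept" is modelled as a process that brings about a relation, and a "memory" as a process one level up that reproduces the concept.

```python
# Illustrative sketch only, not Pask's notation: a "concept" modelled as a
# process that brings about a relation, and a "memory" as a process that
# reproduces (reconstructs) the concept itself.
from typing import Callable

Relation = Callable[[float, float], float]

def concept_of_addition() -> Relation:
    """A concept: a process that recalls or brings about a relation."""
    return lambda a, b: a + b

def memory_of_addition() -> Callable[[], Relation]:
    """A memory: a process one level up, reproducing the concept."""
    return concept_of_addition

# The concept produces the relation; the memory reproduces the concept.
add = concept_of_addition()
rebuilt = memory_of_addition()()
print(add(2, 3), rebuilt(2, 3))  # both yield 5
```

The recursion stops here, of course; in the theory the levels continue and close back on themselves, which no such linear sketch can capture.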

The productive and self-reproductive processes that are a P-Individual can be characterised,
dynamically, as a Petri-net, in which initially asynchronous processes become synchronised:
information transfer between participants (in the form of understandings) coheres them into a
larger totality, in which, though they remain themselves, they become informed of each other.

Conversation Theory includes an evolving theory of conversational domains: ways of characterising
and describing the knowings and doings of P-Individuals, those knowings and doings that are
permissible if the integrity of participants is to be maintained. One application of this theory is the
characterisation of “knowable” domains as those in which all named concepts are related together,
unambiguously, as a logically and pragmatically coherent totality, referred to in the theory as an
entailment mesh. The entailment structures employed in the CASTE studies (described above) are
descriptions of what such meshes look like from a particular perspective which has been adopted for
pedagogical purposes. The hierarchical or “pruned” form of an entailment structure hides or ignores
the heterarchical, cyclic organisation that makes a system of concepts productive and self-
reproductive.
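The pruning operation can be illustrated with a toy sketch (my own, with invented example concepts; Pask's meshes are richer, allowing alternative derivations and analogy relations). Here a mesh is taken as a cyclic graph of derivations, and pruning unfolds it from a chosen head concept into a hierarchy by cutting any derivation that would revisit a concept already on the path:

```python
# Illustrative sketch: an entailment mesh as a cyclic graph of mutual
# derivations, and a "pruning" that unfolds it from a head concept into a
# hierarchy (an entailment structure), hiding the cyclic organisation.
from typing import Dict, List, Optional, Set

# Toy mesh (invented): each concept maps to the concepts from which it may
# be derived; note that every concept is derivable from the others.
mesh: Dict[str, List[str]] = {
    "force": ["mass", "acceleration"],
    "mass": ["force", "acceleration"],
    "acceleration": ["force", "mass"],
}

def prune(head: str, mesh: Dict[str, List[str]],
          seen: Optional[Set[str]] = None) -> dict:
    """Unfold the mesh from `head`, cutting any derivation that would
    revisit a concept already on the path (this is what hides the cycles)."""
    seen = (seen or set()) | {head}
    children = [c for c in mesh.get(head, []) if c not in seen]
    return {head: [prune(c, mesh, seen) for c in children]}

print(prune("force", mesh))
```

Unfolding from a different head concept yields a different hierarchy from the same mesh, which is precisely why a pruned entailment structure represents one pedagogical perspective among many.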

Concluding Comments

Conversation Theory is intended to provide a description of our ultimate reality: we are P-Individuals
conversing and a conversation is a P-Individual. As proposed by C. G. Jung, there is a planetary
conversation, a planetary psyche. Perhaps when we fully understand that we are our ideas, we will be
in a better position to recognise good ideas about how we might or should or could be and will be in a
better position to bring those ideas into effect and propagate them throughout our culture. Any
distinctions we make can be voided as part of a more universal synthesis. Questions such as “what is
the self?” and “what is property?” are not philosophical pseudo-problems. They represent pragmatic
choices: acts of will or intent. In order to do ourselves differently (and better) we need better
understandings of what we are and what we may be.

To end on a personal note: in recent years, I have been exploring what it means to be “in Christ”, both
as an idea and as an experience. Brothers and sisters in cybernetics, it is a blessed place to be.

Future Research Directions

In my opinion the coherence, elegance and richness of Conversation Theory remains unsurpassed as a
contribution to cybernetics, the philosophy of science, psychology and education. I have written
elsewhere about these aspects of Conversation Theory (see references in Additional Reading below).

There is additional coherence, elegance and richness in the empirical work carried out by Pask and
colleagues as Conversation Theory took form. I have in mind three main areas: (i) studies of how
students learn (ii) the design of effective interactive learning environments and (iii) methodologies for
knowledge elicitation and representation for educational purposes.

With respect to (i), there are no extensive, detailed observational studies of student learning that I
know of in the literature comparable to those of Pask and colleagues. The closest we have are the
studies from the ongoing programme of research on students’ information retrieval skills pursued by
Nigel Ford. Ford’s work has been directly influenced by that of Pask and colleagues. (See, e.g., Ford,
2001, 2005).

With respect to (ii), in recent years a number of prototype adaptive systems have been designed to
support learning in hypermedia environments. (See e.g., Brusilovsky et al, 2000.) My reading of the
literature suggests that none of the systems developed thus far are as comprehensive in conception as
CASTE. It also seems, given the few references to CASTE that are to be found in this literature, that
the earlier work has been forgotten and that many wheels are being reinvented. (Arshad and Kelleher
(1993) is a notable exception.) This may not entirely be the case as I suspect that many of the current
generation of researchers are descendants of researchers of previous generations who knew of Pask's
work and adopted many of his ideas. My observation is that where research is based on technologies,
the emphasis tends to be very much on systems using recent technologies. Older systems become
overlooked and rapidly forgotten.

With respect to (iii), there is now a wealth of methodologies and software tools to support knowledge
elicitation and representation. (See e.g., Jonassen et al, 1993; Jonassen et al, 1999.) As argued in a recent
paper (Scott and Cong, 2007), these methodologies of knowledge and task analysis are conceptually and
methodologically confused compared to the approach derived from Conversation Theory. The latter is a much
more satisfactory approach for knowledge analysis and representation both conceptually and practically.
This is because the Conversation Theory methodology makes a clear distinction between conceptual
and procedural knowledge and contains steps that ensure analyses of the two kinds of knowledge are
carried out in complementary and coordinated ways.

In summary, I believe much still can be gained from revisiting and studying the work of Pask and
colleagues. Many of Pask's papers are in hard-to-find conference proceedings. Pask's books on
Conversation Theory are out of print. Other work is only available in research reports. These latter are
available from the British Library. However, it will require a major scholarly effort to retrieve and
evaluate this work.

References

Arshad, F.N., and Kelleher, G. (1993). SOLA: Students On-Line Advisor, Int. J. Man-Machine
Studies, 38, pp. 281-312.
Bruner, J., Goodnow, J., & Austin, A. (1956). A Study of Thinking. New York: Wiley.

Ford, N. (2001). The increasing relevance of Pask’s work to information seeking and use, Kybernetes,
30, 5/6, pp. 603-629.

Ford, N. (2005). “Conversational” information systems, J of Documentation, 61, 3, pp. 362-384.


Harri-Augstein, S. and Thomas, L.F. (1991). Learning Conversations, Routledge, London.

Jonassen, D. H., Beissner, K., & Yacci, M. (1993). Structural knowledge:Techniques for
representing, conveying, and acquiring structural knowledge. Hillsdale, NJ: Erlbaum.
Jonassen, D. H., Tessmer, M., & Hannum, W. H. (1999). Task analysis methods for instructional
design. Mahwah, NJ: Lawrence Erlbaum Associates, Inc.
Luhmann, N. (1995). Social Systems, Stanford University Press, Stanford, CA.
Maturana, H.R. and Varela, F.J. (1980). Autopoiesis and Cognition, D. Reidel, Dordrecht.
Mead, G.H. (1934). Mind, Self and Society, C.W. Morris (ed.), Spartan Books, New York.
Pask, G. (1961), An Approach to Cybernetics, Hutchinson, London.
Pask, G. (1966), A Cybernetic Model for some types of Learning and Mentation. In Cybernetic
Problems in Bionics, Oestreicher, H.C. and Moore, D.R. (Eds.), Gordon and Breach, 1968, 531-585.
Pask, G. (1968). Man as a System that Needs to Learn. In Stewart, D. (Ed.), Automation Theory and
Learning Systems, London, Academic Press, 137-208.
Pask, G., (1972). A Fresh Look at Cognition and the Individual, Int. J. Man-Machine Studies 4, 211-
216.
Pask, G. (1975), Conversation, Cognition and Learning, Elsevier, Amsterdam and New York.
Pask, G. (1976b), Conversational Techniques in the Study and Practice of Education, Brit. J. Educ.
Psych. 46, 12-25.
Pask, G. (1976c), Styles and Strategies of Learning, Brit. J. Educ. Psych. 46, 128-148.
Pask, G. (1976a), Conversation Theory: Applications in Education and Epistemology, Elsevier,
Amsterdam and New York.
Pask, G. (1979), A Conversation Theoretic Approach to Social Systems. In F. Geyer and J. van der
Zouwen (eds.), Sociocybernetics: An Actor Oriented Social Systems Theory, pp. 15-26, Martinus
Nijhoff, Amsterdam.
Pask, G., Kallikourdis, D. and Scott, B.C.E. (1975), The Representation of Knowables, Int. J. Man-
Machine Studies 7, 15-134.
Pask, G. and Lewis, B.N. (1968), The Use of a Null-Point Method to Study the Acquisition of Simple
and Complex Transformation Skills, Brit. J. Math. And Stat. Psych. 21, 61-84.
Pask, G. and Scott, B.C.E. (1971), Learning and Teaching Strategies in a Transformation Skill, Brit.
J. Math. and Stat. Psychol. 24, 205-229.
Pask, G. and Scott, B.C.E. (1972), Learning Strategies and Individual Competence, Int. J. Man-
Machine Studies 4, 217-253.
Pask, G. and Scott, B.C.E (1973), CASTE: A System for Exhibiting Learning Strategies and
Regulating Uncertainty, Int. J. Man-Machine Studies 5, 17-52.
Pask, G., Scott, B.C.E. and Kallikourdis, D. (1973). A Theory of Conversations and Individuals
(exemplified by the learning process on CASTE). Int. J. Man-Machine Studies 5, 443-566.
Scott, B.C.E. (1970), Cognitive Strategies and Skill Learning. In J. Rose (ed), Progress of Cybernetics,
pp 793-802, Gordon and Breach, London.
Scott, B.C.E. (1979), Heinz von Foerster: An Appreciation, Int. Cybernetics Newsletter 12, 209-214.
Scott, B.C.E. (1980), The Cybernetics of Gordon Pask, Part 1: Genesis of a theory, Int. Cybernetics
Newsletter 17, 327-326.
Scott, B.C.E. (1982), The Cybernetics of Gordon Pask, Part 2: the Theory of Conversations, Int.
Cybernetics Newsletter 24, 479-491.
Scott, B.C.E. (1983), Morality and the Cybernetics of Moral Development, Int. Cybernetics
Newsletter, 27.
Scott, B.C.E. (1987), Human Systems, Communication and Educational Psychology, Educ. Psych. In
Practice 4, 4-15.
Scott, B. (1993). Working with Gordon: developing and applying Conversation Theory (1968-1978),
Systems Research, 10, 3, pp. 167-182.
Scott, B. (2001). Cybernetics and the social sciences, Systems Research, 18, pp. 411-420.
Scott, B.C.E. and Cong, C. (2007). Knowledge and task analysis for course design. To appear in the
Proceedings of the International Conference on ICT in Education, Crete, July 2007.
Von Foerster, H. (1960). On self-organising systems and their environments, in Self-Organising
Systems, M. C. Yovits and S. Cameron (eds.), London: Pergamon Press, pp. 30-50.

Von Foerster, H. (2002). Understanding Understanding: Essays on Cybernetics and Cognition,
Springer-Verlag, Berlin.

Additional Reading

Entwistle, N. (2001). Styles of learning and approaches to studying in higher education, Kybernetes,
30, 5/6, pp. 593-602.
Pangaro, P. (2001). THOUGHTSTICKER 1986: a personal history of conversation theory in software
and its progenitor, Gordon Pask, Kybernetes, 30, 5/6, pp. 790-806.
Patel, A., Scott, B. and Kinshuk (2001). “Intelligent tutoring: from SAKI to Byzantium”, Kybernetes,
30, 5/6, pp. 807-818.
Rocha, L.M. (2001). Adaptive recommendation and open-ended semiosis, Kybernetes, 30, 5/6, pp. 790-
806.
Ryan, S., Scott, B., Freeman, H. and Patel, D. (2000). The Virtual University: The Internet and
Resource Based Learning, Kogan Page, London.
Scott, B. (1996). Second-order cybernetics as cognitive methodology, Systems Research 13, 3, pp. 393-
406 (contribution to a Festschrift in honour of Heinz von Foerster).
Scott, B. (1997). Inadvertent pathologies of communication in human systems, Kybernetes, 26, 6/7, pp.
824-836.
Scott, B. (1999). Knowledge content and narrative structures, in Words on the Web: Language Aspects of
Computer Mediated Communication, L. Pemberton and S. Shurville (eds), Intellect Books, Exeter, pp. 13-
24.
Scott, B. (2000). Organisational closure and conceptual coherence, in Closure: Emergent Organizations
and Their Dynamics, Volume 901 of the Annals of the New York Academy of Sciences, J. L. R. Chandler
and G. Van de Vijver (eds.), pp. 301-310.
Scott, B. (2000). Cybernetic explanation and development, Kybernetes, 29, 7/8, pp. 966-994.
Scott, B. (2000). The cybernetics of systems of belief, Kybernetes, 29, 7/8, pp. 995-998.
Scott, B. (2001). Gordon Pask's Conversation Theory: A domain independent constructivist model of
human knowing, Foundations of Science, 6, pp. 343-360.
Scott, B. (2001). Conversation theory: a dialogic, constructivist approach to educational technology,
Cybernetics and Human Knowing, 8, 4, pp. 25-46.
Scott, B. (2002). Cybernetics and the integration of knowledge, invited chapter for Encyclopedia of Life
Support Systems, UNESCO.
Scott, B. (2002). A design for the recursive construction of learning communities, Int. Rev. Sociology, 12,
2, pp. 257-268.
Scott, B. (2004). Second order cybernetics: an historical introduction, Kybernetes, 33, 9/10, pp. 1365-
1378.
Scott, B. and Glanville, R. (eds.) (2001). Special double issue of Kybernetes, Gordon Pask,
Remembered and Celebrated, Part I, 30, 5/6.
Scott, B. and Glanville, R. (eds.) (2001). Special double issue of Kybernetes, Gordon Pask,
Remembered and Celebrated, Part II, 30, 7/8.
Scott, B. (2006). The sociocybernetics of belief, meaning, truth and power, Kybernetes, 35, 3/4, pp.
308-316.
Scott, B. (2007). The co-emergence of parts and wholes in psychological individuation, Constructivist
Foundations, 2, 2-3, pp. 65-71.
Thomas, L.F. and Harri-Augstein, S. (2001). Conversational science and advanced learning
technologies (ALT): tools for conversational pedagogy, Kybernetes, 30, 7/8, pp. 921-954.
Zeeuw, G. de (2001). Interaction of Actor’s theory, Kybernetes, 30, 5/6, pp. 971-983.
Zimmer, R.S. (2001). Variations on a string bag: using Pask’s principles for practical course design,
Kybernetes, 30, 7/8, pp. 1006-1024.
