
Perkins, C., & Murphy, E. (2006). Identifying and measuring individual engagement in critical thinking in online discussions: An exploratory case study. Educational Technology & Society, 9 (1), 298-307.

Identifying and measuring individual engagement in critical thinking in online discussions: An exploratory case study
Cheryl Perkins
Office of Surgical Education, Faculty of Medicine, Memorial University of Newfoundland
Health Sciences Centre, St. John's, NL A1B 2V6, Canada
Tel: +1 709-777-6874
Fax: +1 709-777-8128
cperkins@mun.ca

Elizabeth Murphy
Faculty of Education, Memorial University, St. John's, Newfoundland, Canada A1B 3X8
Tel: +1 709 737-7634
Fax: +1 709 737-2345
emurphy@mun.ca

ABSTRACT
This paper reports on an exploratory case study involving the development of a model for identifying and
measuring individual engagement in critical thinking in an online asynchronous discussion (OAD). The
model is first developed through a review of the literature on the construct and subsequently applied to the
content analysis of the transcripts of eight student participants in an online discussion. The model, which
included four critical thinking processes, their descriptions and indicators, proved effective for the
identification and measurement of individuals’ critical thinking in OADs. Suggestions for further research
include additional testing of the model using other raters in other OADs.

Keywords
Content analysis, Critical thinking, Online asynchronous discussion, Distance learning

Introduction
In spite of their increasing prominence and potential value, the extent to which online asynchronous discussions
contribute to learning has not yet been clearly determined. As Bullen (1998) concluded, there is “limited
empirical support ... for the claims made about the potential of computer conferencing to facilitate higher level
thinking” (p. 7). Likewise, Garrison, Anderson and Archer (2000) argue that, in spite of the perceived potential
of computer-mediated communication and computer conferencing, their effects on learning and its outcomes
have not yet been well investigated. Not surprisingly, according to Gunawardena, Lowe and Anderson (1997),
the use of computer conferencing and online discussions has “...outstripped the development of theory” creating
a need to determine ways of evaluating the quality of interactions and of learning in such contexts (p. 397).

The contribution that online discussions might make to the promotion of critical thinking skills is one example of
the types of investigations that might be pursued in order to further our understanding of the role of these
technologies in promoting learning. Critical thinking skills are often cited as aims or outcomes of education and
Oliver (2001) argues that critical thinking skills represent an important issue for universities. While engagement
in a cognitive process such as critical thinking in an online asynchronous discussion in a university course may
be a desired outcome from both the instructor’s and students’ perspective, it is unclear exactly how we might
determine if such engagement actually occurs.

Although some aspects of critical thinking in online asynchronous discussions have been investigated (e.g.
Angeli, Valanides & Bonk, 2003; Khine, Yeap, & Lok, 2003), few studies focus specifically on critical thinking,
and few focus on reporting individual engagement. More typically, aggregate measures of group engagement are reported instead. Another problematic aspect of previous studies applying content analysis to critical thinking in online discussions is that the instruments are often too cumbersome for use by instructors or
students wanting to measure or identify engagement in critical thinking. According to Henri (1992), “...if content
analysis is to become a workable tool for educators who are not trained as scientists, progress must be made at
the conceptual level...and at the technical level...” (p. 134).

The purpose of the study reported on in this paper was to provide a model that might be used by instructors or
students to identify and measure engagement in critical thinking in the context of online discussions. The model
aims to promote construct validity by developing its related processes through a critical analysis and evaluation of the literature on the construct. The model is then applied to the analysis of the transcripts of eight students
participating in an online graduate course. The paper begins with a description of how the model was developed.
The description is followed by an outline of the study’s methodology. The findings are presented in such a way
as to facilitate comparisons between participants in terms of their critical thinking behaviours. Implications for
practice and research are presented.

Development of a model

Identifying the Processes


Table 1 summarizes the main processes identified from the literature, showing similarities and differences in
approaches to defining the construct of critical thinking. Most of these include five steps: elementary clarification, advanced or in-depth clarification, inference, judgement, and strategies or tactics.
Different authors have combined the same basic processes in different ways in order to facilitate analysis. The
following table contains lists of the critical thinking processes identified by selected earlier authors, shown as
steps followed by critical thinkers.

Table 1. Summary of Critical Thinking Models


Authors | Norris & Ennis (1989) | Henri (1992); Clulow & Brace-Govan (2001) | Garrison, Anderson & Archer (2001) | Newman, Webb & Cochrane (1995) | Bullen (1997)
Step 1 | elementary clarification | elementary clarification | triggering events | clarification | clarification
Step 2 | basic support | in-depth clarification | exploration | in-depth clarification | assessing evidence
Step 3 | inference | inference | provisional inferences | inference | making and judging inferences
Step 4 | advanced clarification | judgement | resolution | judgement | using appropriate strategies and tactics
Step 5 | strategies and tactics | strategies | — | strategy formation | —

Several considerations, such as theoretical compatibility and practicality, must be weighed in making the
selection of the critical thinking processes to be included in a model of critical thinking. For example, the easiest
approach to the selection of critical thinking processes would be to simply choose a list of processes from the
literature. However, this approach may not be feasible if the processes are derived from and organised by a
method aiming to assess groups as opposed to individuals. The Community of Inquiry model (see Archer,
Garrison, Anderson & Rourke, 2001; Garrison, Anderson & Archer, 2001) focuses on “critical thinking within a
group dynamic as reflected by the perspective of a community of enquiry” (Garrison, Anderson & Archer, 2001,
p. 11). This focus on the group dynamic is pertinent when the goal is to examine evidence of critical thinking in
the online community as a whole; however, this approach would not be relevant in cases where the focus is on
the individual member of the online community.

The other researchers listed in Table 1 all give comparable lists of processes. They all cite clarification, making
inferences, and strategies, and make some reference to providing and assessing evidence. How exactly these
processes are organized – for example, is ‘clarification’ a single group of processes, or split into two – depends
on the needs of the researchers, who adapt earlier approaches to identifying critical thinking to their present
purposes (see, Bullen, 1997 pp. 93-94).

The list of critical thinking processes adopted for the study’s model was influenced by many researchers
(Bullen, 1997, 1998; Clulow & Brace-Govan, 2001; Garrison, Anderson & Archer, 2001; Henri, 1992; Newman,
Webb & Cochrane, 1995; Norris & Ennis, 1989). These were examined with the aim of creating as short, yet
complete, a list of processes as possible. For example, Garrison, Anderson and Archer (in press) describe a
triggering event which is the initial phase of critical enquiry when “an issue, dilemma or problem that emerges
from experience is identified or recognised”. This triggering event was eliminated, partly because the holistic
approach of that model makes it difficult to apply to individual transcripts from an online asynchronous
discussion structured and limited by the time and subject matter requirements of a course. In addition, to use one example of a triggering event indicator, the equivalent of the ‘sense of puzzlement’ (Garrison, Anderson &
Archer, 2000) in a course transcript, would be the topic suggested by either the instructor or a student. In a model
designed for simplicity and ease of use, the initial question or triggering event can easily be included as part of
clarification, which is described as ‘Observing or studying a problem’ by Henri (1992) or ‘Focussing on a
question’ by Norris and Ennis (1989).

Upon examining the approaches in Table 1, another modification was judged as both reasonable and useful in
developing a model that can be used as a basis for assessing individuals’ engagement in critical thinking in an
OAD. This modification consisted of combining elementary and advanced or in-depth clarification into one
category, as they are similar, following the precedent set by Bullen (1997). This approach would lead to a model
consisting of four categories or processes as follows: clarification, assessing evidence, inference, and strategies.
The single clarification category includes Norris and Ennis’ (1989) elementary clarification, and some parts of
their advanced clarification – those parts dealing with defining of terms and identifying assumptions. The
clarification category of this model does not include those parts of Norris and Ennis’ (1989) elementary
clarification dealing with judgements. In this model, judgements are part of the assessment category.
‘Clarification’ in this model is similar to a combination of Henri’s (1992) elementary and in-depth clarification,
and also to Bullen’s (1997) clarification.

Although, in many respects, the model presented in this paper is similar to Bullen’s (1997), there is one major
difference. Bullen (1997) uses ‘strategies’ to refer to thinking strategies, such as using algorithms, models, and
changing focus (looking at the big picture). Other researchers tend to consider ‘strategies’ to refer to taking
action as a result of thinking critically about a problem or issue. Garrison, Anderson and Archer (in press) write
of a “resolution ... by means of direct or vicarious action” (p. 2); Newman, Webb and Cochrane (1995) use
strategy formation, which they describe as “Proposing co-ordinated actions for the application of a solution, or
following through on a choice or decision” (Evaluating critical thinking, para 8). In this study, ‘Strategies’ will
be understood as a plan for action and not a way to analyse the problem.

These differences are a result of the process of drawing on the earlier models to create one that can easily be
used to support the coding of transcripts of an online asynchronous discussion. For simplicity and ease of use,
the number of processes has been reduced to four, and the understanding of the processes ‘strategies’ and
‘judgement’ or ‘assessment’ modified. The four processes begin with clarification which includes everything
involved in proposing, describing and defining the issue. Next is assessment, which covers various types of
judgements, including the use of evidence to support or refute a judgement. The third process is inference, which
covers thinking skills – not only induction and deduction, but generalizing as well. Finally, the fourth process,
strategies, does not refer to tactics such as the use of algorithms or models, but to practical proposals for dealing
with the issue under discussion. Clearly, this model leaves out much of the rich detail of critical thinking scholarship. However, a model which identified critical thinking processes in more detail would be less suited to the
task of providing a comparatively simple way to assess critical thinking of individuals in an OAD used as part of
a course.

In order to move from a critical thinking model to the analysis of a transcript, further delineation of the model is
necessary. Following the addition of descriptions for each critical thinking process, this delineation may be
provided by lists of indicators associated with the critical thinking processes. Indicators are sometimes referred
to by other names; for example, Norris and Ennis (1989) write about ‘topics’. Whatever term is used, indicators
provide further insight into the different critical thinking processes. They help clarify in the minds of the users of
the model which types of thinking belong in each critical thinking category.

Identifying the Indicators

Critical thinking processes are broad enough that a very long list of indicators could be written to represent each
of them. It is necessary to provide a list of indicators that captures the essence of the particular critical thinking
process in question without being excessively long and complicated to use when applying the model to the
analysis of a transcript of an OAD. To ensure discriminant capability, each indicator should refer to only one
aspect of a critical thinking process, and no two indicators should refer to the same aspect of critical thinking. In
addition, the indicators, taken together, should cover all the aspects of critical thinking processes to avoid
construct under-representation, without being so numerous as to make applying the final model too time-consuming and cumbersome.

The choice of the model’s indicators can be illustrated with the following example. One of the critical thinking
processes is clarification: seeking and expressing understanding of the topic in question. Clearly, clarification includes a wide range of actions. A first step in choosing an indicator is to examine previous work to determine
what approaches have already been used to create an indicator for this aspect of clarification. One of the most
basic aspects of clarification is identifying or stating an issue. Table 2 provides the results of this examination for
this example.

Table 2. An Example of Choosing and Writing Critical Thinking Indicators


Study | Indicator
Norris & Ennis (1989) | Seek a statement of the thesis or question
Henri (1992) | Identifying relevant elements
Garrison, Anderson & Archer (2001) | Recognizing the problem
Newman, Webb & Cochrane (1995) | Course related problems brought in
Bullen (1997) | Focusing on a question: a) Identifying or formulating a question

Deciding on the exact wording of an indicator is influenced by the context in which it will be used. For example,
the study reported on in this paper focused on a course in which issues, many of them identified by the students,
are to be discussed. ‘Seek’ (Norris & Ennis, 1989) is too broad in this context. ‘Identifying’ (Henri, 1992) and
‘recognizing’ (Garrison, Anderson & Archer, 2001) are not appropriate for an OAD in a course for which
participants are expected to suggest topics for debate. Newman, Webb and Cochrane’s (1995) wording is close,
but they add other indicators to cover the possibility that the topic of discussion or problem may arise outside the
course. Bullen’s (1997) version is too lengthy to be practical. After examining the relevant indicators from the
literature, the term ‘proposes’ was chosen as the appropriate verb. It includes both the idea of identifying or
seeking a topic, as used by other researchers, but also includes the idea that the topic is to be presented to a group
for discussion. This makes ‘proposes’ a suitable choice for a model intended for use with an OAD. ‘Problems’,
as used in some of the examples from the literature, was avoided in favour of ‘issues’ because ‘problems’ might
imply that problem-solving was being identified and measured. The other indicators were added following the
same procedure. Table 3 presents the final model with the indicators added to each of the four categories. The
model also includes a description of each process.

Table 3. Model for identifying engagement in critical thinking


CLARIFICATION
All aspects of stating, clarifying, describing (but not explaining) or defining the issue being discussed.
Indicators:
- Proposes an issue for debate.
- Analyses, negotiates or discusses the meaning of the issue.
- Identifies one or more underlying assumptions in a statement in the discussion.
- Identifies relationships among the statements or assumptions.
- Defines or criticizes the definition of relevant terms.

ASSESSMENT
Evaluating some aspect of the debate; making judgments on a situation, proposing evidence for an argument or for links with other issues.
Indicators:
- Provides or asks for reasons that proffered evidence is valid.
- Provides or asks for reasons that proffered evidence is relevant.
- Specifies assessment criteria, such as the credibility of the source.
- Makes a value judgment on the assessment criteria or a situation or topic.
- Gives evidence for choice of assessment criteria.

INFERENCE
Showing connections among ideas; drawing appropriate conclusions by deduction or induction, generalizing, explaining (but not describing), and hypothesizing.
Indicators:
- Makes appropriate deductions.
- Makes appropriate inferences.
- Arrives at a conclusion.
- Makes generalizations.
- Deduces relationships among ideas.

STRATEGIES
Proposing, discussing, or evaluating possible actions.
Indicators:
- Takes action.
- Describes possible actions.
- Evaluates possible actions.
- Predicts outcomes of proposed actions.
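
For readers who wish to operationalize Table 3, the processes, their descriptions and their indicators can be stored in a simple data structure. The following is a minimal sketch (our illustration, not an artifact of the study) in Python; all names are illustrative.

```python
# A minimal sketch (not part of the original study) encoding the model in
# Table 3 as a Python dictionary; all names are illustrative.
MODEL = {
    "Clarification": {
        "description": "All aspects of stating, clarifying, describing "
                       "(but not explaining) or defining the issue being discussed.",
        "indicators": [
            "Proposes an issue for debate.",
            "Analyses, negotiates or discusses the meaning of the issue.",
            "Identifies one or more underlying assumptions in a statement in the discussion.",
            "Identifies relationships among the statements or assumptions.",
            "Defines or criticizes the definition of relevant terms.",
        ],
    },
    "Assessment": {
        "description": "Evaluating some aspect of the debate; making judgments "
                       "on a situation, proposing evidence for an argument or "
                       "for links with other issues.",
        "indicators": [
            "Provides or asks for reasons that proffered evidence is valid.",
            "Provides or asks for reasons that proffered evidence is relevant.",
            "Specifies assessment criteria, such as the credibility of the source.",
            "Makes a value judgment on the assessment criteria or a situation or topic.",
            "Gives evidence for choice of assessment criteria.",
        ],
    },
    "Inference": {
        "description": "Showing connections among ideas; drawing appropriate "
                       "conclusions by deduction or induction, generalizing, "
                       "explaining (but not describing), and hypothesizing.",
        "indicators": [
            "Makes appropriate deductions.",
            "Makes appropriate inferences.",
            "Arrives at a conclusion.",
            "Makes generalizations.",
            "Deduces relationships among ideas.",
        ],
    },
    "Strategies": {
        "description": "Proposing, discussing, or evaluating possible actions.",
        "indicators": [
            "Takes action.",
            "Describes possible actions.",
            "Evaluates possible actions.",
            "Predicts outcomes of proposed actions.",
        ],
    },
}
```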

Methodology

The OAD transcripts used in this study were obtained from two different sections of a web-based, graduate
course in Education during the Fall, 2002 and Spring, 2003 semesters. Information on the course, obtained from
the course web site with permission of the instructor, showed that the two sections of the course were the same.
Twenty percent of the final course grade was assigned to students’ participation in the OAD. The grading rubric
indicates that students were expected to demonstrate many aspects of critical thinking in their posts. For
example, for a score of 18-20, students would be expected to write postings that “...reflect a superior level of
insight, originality, analysis and critical thinking...”. Twelve of the thirty-five students in the two sections of the
course responded to an email request for volunteers to participate in the study. Some participants who responded
to the original email request were not included in the study for various reasons, including no or delayed return of
the signed consent form, and extremely brief or atypical postings. Of the twelve people who had responded to
the original call for volunteers, eight were included in the study.

The application of the model involved reading transcripts, marking passages representing a unit of meaning and
coding each passage. Two approaches to coding were tried during the application: one was to code the units of
meaning by indicator, and then cluster the indicators according to the critical thinking process. The second was
to use the indicators as guides, but code each unit of meaning according to the appropriate critical thinking
process directly. The second procedure proved the more effective of the two and was adopted for the application of the model.
In some cases, more than one critical thinking process appeared within a given passage, and the passage was
coded as demonstrating the process that appeared most important in that context. Therefore, only one code was
used for each unit of meaning.
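
As a concrete illustration of this bookkeeping, the sketch below (ours, not the authors'; Python, with hypothetical names) enforces the one-code-per-unit rule and tallies the codes for a single transcript.

```python
from collections import Counter

# The four processes of the model; each unit of meaning receives exactly one.
PROCESSES = ("Clarification", "Assessment", "Inference", "Strategies")

def tally_codes(coded_units):
    """Tally units of meaning per critical thinking process.

    coded_units: one process name per unit of meaning, reflecting the rule
    that each unit is coded once, for its dominant process.
    """
    counts = Counter(coded_units)
    unknown = set(counts) - set(PROCESSES)
    if unknown:
        raise ValueError(f"Unrecognized codes: {sorted(unknown)}")
    return {p: counts.get(p, 0) for p in PROCESSES}

# Hypothetical example: five units of meaning from one transcript.
print(tally_codes(["Clarification", "Inference", "Clarification",
                   "Strategies", "Assessment"]))
# -> {'Clarification': 2, 'Assessment': 1, 'Inference': 1, 'Strategies': 1}
```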

Although all of the transcripts selected for the study were coded, not all text in all transcripts received a code.
Most of the text that was not included was material of a personal or social nature, such as the personal
introductions at the beginning of the course. While important for creating a sense of community among the
online students, these passages were clearly not part of the discussion and analysis of issues which the course
was intended to address, and which were expected to produce examples of critical thinking. Finally, posts
looking for partners for group work or taking care of other such practical details were not included in the
analysis, for the same reasons that the personal introductions were omitted.

Findings
Table 4 presents a summary of individual participants’ engagement in critical thinking which was derived by
coding each of their transcripts using the model presented in an earlier section of this paper. Individual
participants’ engagement is discussed and selected examples for each critical thinking process are presented.

Table 4. Numerical summary of participants’ engagement in critical thinking


Participant | A | B | C | D | E | F | G | H | Mean
Total # of messages | 79 | 27 | 27 | 87 | 39 | 49 | 19 | 25 | 47
Total # of coded units | 61 | 34 | 28 | 63 | 53 | 35 | 19 | 27 | 46
% of units coded as Clarification | 41 | 29 | 46 | 51 | 26 | 28 | 37 | 37 | 42
% of units coded as Assessment | 16 | 35 | 40 | 9 | 34 | 57 | 5 | 37 | 33
% of units coded as Inference | 20 | 15 | 11 | 16 | 21 | 9 | 48 | 19 | 23
% of units coded as Strategies | 23 | 21 | 3 | 24 | 19 | 6 | 10 | 7 | 16
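
The percentage rows of Table 4 can be derived from raw tallies with a computation along these lines; this is our sketch, and the counts shown are hypothetical rather than taken from the study's data.

```python
def percentage_profile(counts):
    """Express per-process unit counts as percentages of all coded units."""
    total = sum(counts.values())
    return {process: round(100 * n / total) for process, n in counts.items()}

# Hypothetical tallies for one participant (50 coded units in total).
print(percentage_profile({"Clarification": 21, "Assessment": 16,
                          "Inference": 8, "Strategies": 5}))
# -> {'Clarification': 42, 'Assessment': 32, 'Inference': 16, 'Strategies': 10}
```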

As is evidenced by the results in the table, the group as a whole tended to engage more in clarification and less in
strategies than in the three other processes. In general, participants tended to engage less in strategy-related
behaviours with a mean percentage of 16% of the units coded for this process. In contrast, participants tended to
engage more in clarification-related behaviours with a mean percentage of 42% of the units coded for this
process. Comparisons between participants highlight individual differences and preferences for engagement in
particular processes related to critical thinking.

At the individual level, beginning with Participant A, we can observe that this individual engaged more
frequently in clarification than in other processes. He also posted the second highest number of messages as
compared to other participants. This individual begins by proposing filtering as an issue for debate or discussion
and by distinguishing it from a similar activity: “Filtering software was seen as a proactive measure, whereas
simple monitoring could only ever be reactive”. He also used critical thinking processes other than clarification.
In the following example, he makes a value judgement or assessment of his co-workers in education, as opposed to those in other professions: “It's one of the things that I like best about the teaching profession - that for the
most part the people I work with are generous and willing to help their colleagues in any way they can.”

Participant A also engages in inference when he notes the similarities and differences between classroom and
online asynchronous discussions with the following conclusion: “Discussion would be different because it would
be less reflective, but more spontaneous.” Compared to others in the group, he exhibits above-average
engagement in strategy-related behaviours. In the following passage, he describes ways to deal with problems
that arise when students are searching online. He also predicts the outcomes of this activity based on his
experience:

Even when every student is doing research on a different topic they have chosen themselves, there is
still a way to limit that type of accident. Before letting them go wild in Google, I get them to write their
search string on a piece of note paper and show it to me. It lets me see what they might see and gives
me a chance to help refine their searching techniques.

Like Participant A, Participant B engaged more in clarification than in inferring. His engagement in inferring was average compared to others, while his engagement in clarification was less than average. He demonstrates
assessment by briefly evaluating the results of an attempt to solve a problem by administering a mathematics
skills test in school: “However, preliminary findings have shown little increase in their skills after the MST”. He
follows this assessment of the results by suggesting a strategy: “I am looking towards interactive computer
software that students could use outside of class time”. In the following passage, he shows evidence of inferring
by arriving at a conclusion and hypothesizing, and by showing the connections between the universities’ assessment
of constructivism in mathematics instruction, and teachers’ classroom practices, particularly those involving the
use of technology in mathematics instruction:

In a few years’ time when the universities claim that the constructivism approach to mathematics was a
failure … they will be right. Especially if something is not done to encourage math teachers to use
technology so that students can construct their own meaning of the concepts.

Of note in the transcript of Participant C is that the last critical thinking process, strategies, was not one he used
extensively. But he did suggest one strategy as follows:

Why don't we create our own digital repository [?] If anyone in the class has resources that they have
gathered or created, and posted to their own web sites, please send me the URL, and a short description
of the subject areas to which they apply.

Had the individual made more postings, he might have exhibited more examples of engagement in strategies.
This individual only had 28 coded units which was below the mean of 46 and well below the number of coded
units for Participant A. Of the postings made, more units were coded for clarifying and assessing than for other
processes. The following example is illustrative of how individuals can engage in inferring by presenting a
conclusion along with supporting arguments:

… collaboration is necessary because we can all learn from each other … Even with tremendous
dedication of time, and effort, we can only come up with so much on our own, and what we do learn
throughout the solitary process will be influenced by our earliest exposure to topics, as well as our own
limitations of preference, and ability. Collaboration allows us a process to circumvent these limitations.

Coding of the transcript of Participant D showed how her engagement in critical thinking was different from that
of the others in her group. While her transcript produced almost the same number of coded units as Participant A,
her use of the various processes was different from his, and from that of other participants. For example, she had
the highest percentage of units coded as clarification as well as the highest percentage coded for strategies. She
also had the highest number of coded units compared to all others. Interestingly, in terms of assessment and
inferring, she had a below average number of coded units. Compared to the other participants, her engagement in
critical thinking is most similar to that of Participant A. Although she does not frequently engage in assessing,
this one example provides clear indicators of the process: “After researching and completing the trend of
integration of computers in classrooms, my only conclusion is that the majority of barriers could be eliminated
with an influx of money.” In terms of units coded as strategies – in other words, describing, proposing or
evaluating possible outcomes or actions, she wrote an extensive description of ways in which a class web site
can be used to address numerous educational problems such as forgotten texts, homework, or handouts, access to missed work for sick or failing students, and access to an email link for students who are away from class and
need to submit work.

Except for the lower levels of engagement in clarification as compared to the others in the group, Participant E’s
engagement in all other processes is average. He enters into the discussion by engaging in clarification and
proposing an issue for debate as follows: “Is the use of technology in our schools changing the relationship
among teachers, students and parents?” He then goes on to assess and judge this issue by focusing the discussion
on the meaning of terms:

I also wonder what we mean when we say 'value'. In my school, each teacher has a networked
computer on the desk. All teachers use it for attendance, and …. I think we now value computer
technology for these uses. However, if I asked my colleagues how many of them used these computers
with their students, I suspect the answer would be …. Is this also an issue of 'value'?

In the following example, he draws conclusions and states hypotheses: “If it is through this negotiation that we
construct our knowledge, then we need to build negotiation into the process.” The next excerpt from his
transcript provides a clear example of a unit coded as strategy: “I am going to use LOs every chance that I get!
… Some of them just present similar information but in different ways, so they will help me cater to various
learning styles.”

The transcript of Participant F presents the highest percentage of units coded as assessment compared to other
participants. As well, of note in the transcript is a low level of engagement in both inference and strategies with
only 9% and 6% respectively. An example of her engagement in clarification can be found in a message in
which she proposes an issue related to grading online discussions: “One point I wanted to raise about online
discussion is the motivation when marks are involved. … Sure, assigning a grade encourages students to post,
but to what end?” An example of her engagement in assessment is evident in her valuing of one form of
discussion over another: “For me, this mode of discussion and expression of views appears to have many
benefits over classroom discussion.” There were few instances of engagement in inference and strategies;
however, one example might be found when she draws conclusions and makes generalizations about technology
use as follows: “I agree that the way in which technology is being used can be responsible for the loss of certain
basic skills.” In terms of strategies, she proposes a solution to the problem of the lack of teacher training in
technology in the following segment: “…perhaps a web site which combines tutorials with a technology
question forum as you have suggested might be a welcome addition.”

The analysis of Participant G’s transcript revealed very low engagement in assessment compared to the other
participants and the other processes. At the same time, coding also revealed the highest engagement in inference
compared to all other participants. Like others, this participant did not often propose actions or strategies. His
use of clarification is evident in the following passage in which he describes his interest in the issue of access to
computers for the disabled: “As I expect to become involved in designing online learning in the near future, I
would like to learn how to make online content accessible to all individuals, including those with disabilities”.
An example of engagement in assessment is as follows: “Personally I have found sharing to be very easy in my
work, perhaps because I have only had positive sharing experiences.” One of the many examples in his transcript
coded as inference and drawing conclusions is as follows: “Sharing with others also promotes positive relations
in the school community, which can help make the school a more comfortable and ‘easier’ place to be”. An
example of strategies is evident in the following proposal: “To promote interactive discussion, respondents could
be required to make one original criticism (in addition to any original praise).”

Of note in the coding of the transcript of Participant H was the equal emphasis placed on clarification and
assessment and the low engagement in strategies. In terms of clarification, she provides an example of a unit that
might be coded as this process in this excerpt from her transcript: “As to Internet Plagiarism, I think it's not only
found in Educational field and it should not only point to students.” In terms of assessment, one example of a
unit coded for this process is as follows: “All these were running well via the medium of web, which I think
offer a good opportunity for us to share work and exchange ideas anytime anywhere. I really enjoy this kind of
teaching and learning.” The following example illustrates engagement in inference by showing connections,
generalizing and explaining:

…I think information is more like a carrier for knowledge. … Information could be any piece of mental
or mental-based visual, audial (sic) existence or abstract thoughts. While knowledge would be a
systematic information collection. We collect and digest information and then construct our own
knowledge.

Discussion
The purpose of the study was to provide a model of critical thinking that could be used efficiently and easily to
derive and present individual profiles of engagement in critical thinking. The profiles presented above offer only limited insight into each individual. However, were it not for the length limitations of a paper, more information could be generated on each participant by using the model to code their transcripts. The numerical
summary presented at the beginning of the findings highlights similarities and differences between participants.
With a larger group, the comparison might be more revealing in terms of showing patterns of behaviours.
Nonetheless, even with the limited size of the group, comparisons could be made. The brief descriptions
provided for each participant provide insight into how the processes and indicators can be matched with
behaviours evident in the transcripts. The examples also highlight the different ways that the behaviours can
manifest themselves.

The coding using the model made evident differences in behaviour that might be useful in a variety of ways. The
instructor may be planning to engage students in particular behaviours within the realm of critical thinking. For
example, a successful outcome in the context of a particular course might be to be able to propose an issue for
debate, evaluate and make judgements related to the issues, subsequently pose hypotheses, generalizations or conclusions, and finally suggest actions or strategies that might be taken or adopted. Using a model such as the
one proposed in this paper provides a means to direct such behaviour by outlining the indicators and types of
associated behaviour. It also provides a means to verify that individuals actually did engage in all the required
behaviours. Analyses of each participant’s transcript might be completed by the students themselves, with each required to submit examples of engagement in these processes in the course of the discussion. In this regard, the model could be provided to students to guide their behaviour formatively and to assess it summatively.

Other useful insights can be derived from these results. There were clear differences in the proportions among
the critical thinking processes for the different students. Since all students were in the same course, and most of
them in the same OAD, the differences in the critical thinking processes in which they engaged may reflect differences in the processes that individual students are comfortable with, or even capable of using. Knowing this, the
instructor may decide to revise the course to encourage a broader range of processes, or provide feedback to
students who appear to be uncomfortable with or unable to engage in a particular critical thinking process. The
model may also prove useful for designers and instructors interested in structuring or moderating the
discussion. It could be used to focus on developing teaching strategies or to encourage specific types of critical
thinking processes. If, for example, the goal is for the students to engage in inference more often, and
clarification less often, the model can help direct engagement in one or the other.

Conclusions and Further Research


The results of the application of the model showed that it could be used to obtain insight into the critical thinking
processes used by participants in an OAD. This approach could be applied to other thinking skills, such as
problem-solving or knowledge construction. The need for such work has been identified in the literature (see
Henri, 1992; Rourke, Anderson, Garrison & Archer, 2001; Zhu, 1996). The greatest challenge in any study using
the approach of this one is undoubtedly creating and selecting indicators, which are essential to promoting
construct validity of the model and its ability to adequately represent the construct in question.

This study was limited to coding by only one rater; therefore, no tests of reliability were conducted. Although
there are some indications in the literature that inter-rater reliability is acceptable in rating online discussion
transcripts (see MacKinnon, 2003), further work would be useful in confirming or contradicting this. Future
studies might make use of the model with other raters, in different courses, in other contexts and with more
participants.

The proportions of the critical thinking processes observed may be affected by the requirements of the course as well as by personal variations among the students. This is one hypothesis that could be tested by
expanding the application of the model into a wider range of OADs from different courses. Such expanded
application could also be used to find evidence as to whether there are subject-specific critical thinking
processes, and if there are, what processes and indicators should be added to the model. Some research in this
area would include measuring uncritical, as well as critical, thinking, in order to give a better and more balanced picture of an individual’s thinking. Adding this dimension to the model and testing it would be another avenue
for further research.

Both instructors and students could benefit from using the model developed in this study. Instructors who have
designed their OAD to encourage the use of critical thinking processes can rate their students’ transcripts using
the model in order to assess the success of their efforts to encourage critical thinking. They can also focus on
developing teaching strategies to encourage specific types of critical thinking processes if, for example, they
want the students to use inference more often, and clarification less often. Applying the model to their students’
transcripts will reveal which critical thinking processes are most frequently engaged in. This is information that
instructors need before deciding which specific skills to encourage or before determining how successful their
efforts were in supporting particular skills.

The model could also be used as the basis of a student evaluation tool. It would be relatively simple to
modify the model into a rubric by assigning marks to each critical thinking process and adjusting the rating
system somewhat. In other words, it would be necessary to rework the model from one intended to provide
feedback on a personal level to one specifically designed to compare and rate students’ performances.
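
One possible shape for such a rubric is sketched below; the weights, attainment target and maximum mark are entirely hypothetical, since the model itself prescribes no marking scheme.

```python
# Entirely hypothetical weights and targets; the model assigns no marks itself.
WEIGHTS = {"Clarification": 0.2, "Assessment": 0.3,
           "Inference": 0.3, "Strategies": 0.2}

def rubric_mark(counts, target=5, max_mark=20):
    """Toy rubric: a process earns its full weight once a student shows
    `target` coded units of it, and proportional credit below that."""
    attained = sum(weight * min(counts.get(process, 0) / target, 1.0)
                   for process, weight in WEIGHTS.items())
    return round(attained * max_mark, 1)

# Hypothetical student: strong on clarification, weaker on strategies.
print(rubric_mark({"Clarification": 9, "Assessment": 4,
                   "Inference": 3, "Strategies": 1}))  # -> 13.2
```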

Students could also use this model, in their case, for self-assessment. Self-assessment might be required by the
instructor as part of the course work or course evaluation. Some students may wish to use it for their own
personal benefit, to enhance their understanding of the cognitive processes involved in critical thinking, or to
monitor and enhance their own contributions to an OAD.

In conclusion, this study, although small scale and preliminary in nature, demonstrates the potential usefulness
and importance of identifying critical thinking in online asynchronous discussion groups.

References
Angeli, C., Valanides, N., & Bonk, C. J. (2003). Communication in a web-based conferencing system: the quality of
computer-mediated interactions. British Journal of Educational Technology, 34 (1), 31-43.

Archer, W., Garrison, D. R., Anderson, T., & Rourke, L. (2001). A framework for analysing critical thinking in computer
conferences. Paper presented at EURO-CSCL 2001, Maastricht, retrieved August 16, 2005 from
http://www.ll.unimaas.nl/euro-cscl/Papers/6.doc.

Bullen, M. (1997). A case study of participation and critical thinking in a university-level course delivered by computer
conferencing, Unpublished doctoral dissertation, University of British Columbia, Vancouver, Canada, retrieved October, 25,
2005 from http://www2.cstudies.ubc.ca/~bullen/publications.html.

Bullen, M. (1998). Participation and critical thinking in online university distance education. Journal of Distance Education,
13 (2), 1-32.

Clulow, V., & Brace-Govan, J. (2001). Learning through bulletin board discussion: A preliminary case analysis of the
cognitive dimension. Paper presented at the Moving Online Conference II, September 2-4, 2001, Gold Coast, Australia.

Garrison, D. R., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: computer conferencing in
higher education. Internet and Higher Education, 11 (2), 1-14.

Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in
distance education. American Journal of Distance Education, 15 (1), 7-23.

Garrison, D. R., Anderson, T., & Archer, W. (in press). Critical Thinking and Computer Conferencing: A Model and Tool to
Assess Cognitive Presence. American Journal of Distance Education, Retrieved October 25, 2005 from
http://communitiesofinquiry.com/documents/CogPres_Final.pdf.

Gunawardena, C., Lowe, C. A., & Anderson, T. (1997). Analysis of a global online debate and the development of an
interaction analysis model for examining social construction of knowledge in computer conferencing. Journal of Educational
Computing Research, 17 (4), 397-431.

Henri, F. (1992). Computer conferencing and content analysis. In A. R. Kaye (Ed.), Collaborative learning through computer
conferencing: The Najaden papers, Berlin: Springer-Verlag, 115-136.

Khine, M. S., Yeap, L. L., & Lok, A. T. C. (2003). The Quality of message ideas, thinking and interaction in an asynchronous
CMC environment. Educational Media International, 40 (1-2), 115-125.

MacKinnon, G. R. (2003). Inter-rater reliability of an electronic discussion coding system, ERIC Document Reproduction
Service No. ED 482 252.

Newman, D. R., Webb, B., & Cochrane, C. (1995). A content analysis method to measure critical thinking in face-to-face and
computer supported group learning. Interpersonal Computing and Technology, 3 (2), 56-77.

Norris, S. P., & Ennis, R. (1989). Evaluating critical thinking. In Swartz, R. J. & Perkins, D. N. (Eds.), The practitioner’s
guide to teaching thinking series, Pacific Grove, CA: Midwest Publications, 1.

Oliver, R. (2001). Exploring the development of critical thinking skills through a web-supported, problem-based learning
environment. In J. Stephenson (Ed.), Teaching and learning online: pedagogies for new technologies, London: Kogan Page,
98-111.

Rourke, L., Anderson, T., Garrison, D. R., & Archer, W. (2001). Methodological issues in the content analysis of computer
conference transcripts. International Journal of Artificial Intelligence in Education, 12, retrieved October 25, 2005 from
http://www.atl.ualberta.ca/cmc/2_et_al_Content_Analysis.pdf.

Zhu, E. (1996). Meaning negotiation, knowledge construction, and mentoring in a distance learning course, ERIC Document
Reproduction Service No. ED 397 849.
