
THE NEED FOR AN E-TAXONOMY

Bobby Elliott Scottish Qualifications Authority

July 2011

Abstract
Online writing, in various forms, is common, but there has been little research into ways of analysing and evaluating the quality of such communication. This paper addresses the following research questions:

1. Can a traditional taxonomy be used to analyse the learning that takes place in an online environment?
2. Can a taxonomic approach to the analysis of online writing be used to aid assessment?

It uses content analysis techniques to study two forums, used within a post-graduate online learning programme, and reports on the nature of learning in this environment. The techniques used are well-known methods of analysing academic discourse; one is commonly used in the online environment (the practical enquiry model) but the other is not (a derivation of Bloom's Taxonomy). This paper seeks to assess their suitability to the online domain. The forums selected for analysis differed in one important respect: one was assessed, the other was not. By comparing the results of the analyses, inferences are made about the impact of assessment on the nature of learning in online forums. The paper makes two recommendations relating to the analysis and assessment of online writing:

1. Traditional taxonomies can be used to analyse online writing, but such taxonomies need to be updated.
2. An e-taxonomy should be created for the purpose of analysing online writing and improving assessment practice.

The first recommendation relates to the use of Anderson and Krathwohl's taxonomy to analyse online writing. This derivation of Bloom's Taxonomy was found to be applicable to the online environment, providing useful information about the quality of academic discourse, and aiding the assessment of online writing. However, it needs to be customised to the online domain. The paper argues that an e-taxonomy is a necessary precursor to an e-pedagogy, which is urgently needed to embrace the new affordances of the contemporary learning environment. An e-taxonomy would serve the same needs that Bloom's original taxonomy served in the last century. The paper also introduces a participation index (PI) and suggests that it is a useful metric for visualising participation in online discussions.

1: Introduction

1.1 Structure of paper


The paper is divided into four sections:

1. Introduction
2. Research methods
3. Findings
4. Conclusions and recommendations.

This introductory section provides background information on the key themes of the paper: computer-mediated communications, online discussion boards, content analysis, and online writing. It concludes by stating the research questions, describing the scope of the research, and outlining its potential value.

1.2 Background to online writing


The use of computers in education has a long tradition. Higher Education (HE) has used asynchronous communication tools (such as e-mail and discussion boards) since the late 20th century. More recently, a range of Web 2.0 tools, such as blogs and wikis, have been added to this toolset. Students enjoy the use of technology in their classes (Clarke, Flaherty & Mottner, 2001) and its use is supported by a wealth of long-established pedagogical research, such as Vygotsky's theories on language and the construction of knowledge (1987), and Kolb's emphasis on the importance of learning through participation (1984). Subsequent research has highlighted the importance of the social construction of knowledge (see, for example, Pea, 1993). A large body of empirical evidence supports the use of computers in the learning process. A meta-analysis carried out by the Department of Education in the United States (US Department of Education, 2009)¹ concluded that "students who took all or part of the class online performed better, on average, than those taking the same course through traditional face-to-face instruction". The study reported that instruction which combined online learning with face-to-face learning was most effective. These findings were true irrespective of the means of implementation [of online learning] and were found to be applicable irrespective of content or learners.²

¹ Over 1000 research studies were included in this meta-analysis. Most related to the post-school sector; few studies have been carried out for younger learners.


This study looks at one particular form of online learning: that which involves the use of online discussion boards. Online discussion boards are one instance of what has been variously described as online writing, digital writing, digital learning narratives and Web 2.0 writing, which can take various forms including online forums, blogs, wikis, social networks, instant message logs, and virtual worlds. Many of the issues raised in this paper are relevant to all forms of online writing.

This new medium provides new affordances: new ways of using the medium to communicate and collaborate. For example, online writing may make visible such things as co-operation, collaboration, and self-reflection; the learner's thought processes are also more apparent (Knox, 2007). The inclusion of multimedia (such as audio and video) is straightforward. Referencing (hyperlinking) to related resources or information is simple. The asynchronous nature of many online communications makes the time and place of contributions more flexible, and provides more "wait time" to improve opportunities for reflective writing (Berliner, 1987). The dialogue may have an audience beyond the walls of the university (perhaps at campus, national or global level). The scope of the tasks that may be set can be greater, with faculty setting large-scale projects such as collaborative book writing or large-scale software construction (Gray et al, 2009). Authenticity can be improved by tackling real-world issues, and by seeking feedback from peers and experts across the world (Gray et al, 2009). The potential for producing authentic, large-scale, co-constructed, continuously improved, media-rich works of national or international interest is unique.

These new affordances have implications for assessment. They provide an opportunity to assess skills that were previously considered difficult or impossible to assess. For example, collaboration has long been recognised as an important skill but is rarely assessed due to inherent difficulties in measuring this competence through traditional means. The new affordances offer opportunities to set new kinds of assessment activities: real activities with real value rather than the contrived tasks often used to assess learners' knowledge and skills

² The report noted that the benefits of online learning may not necessarily relate to the medium but could be the effects of other factors such as time spent learning, the curriculum, or pedagogy.


(Gray et al, 2009). Traditional assessment has been criticised as being artificial, trivial or irrelevant (see, for example, Brown et al, 1997). The online environment offers the prospect of addressing these criticisms, and of rethinking current practice. But there are currently no standards for describing online writing. The lack of an online pedagogy, an e-pedagogy, has been noted (Elliott, 2009), but the absence of a vocabulary, or taxonomy, for describing or appraising online activities is even more fundamental. An e-taxonomy is necessary to underpin an e-pedagogy. This paper explores whether traditional taxonomies can be used as the basis of an e-taxonomy.

1.3 Content analysis methods


Online forums, by their nature, consist of a large volume of words, assembled into threads, which consist of posts and replies. The analysis of written text is normally carried out using a technique called content analysis. Content analysis has been defined as "a systematic, replicable technique for compressing many words of text into fewer content categories based on explicit rules of coding" (Berelson, 1952; GAO, 1996; Krippendorff, 1980; Weber, 1990). Holsti (1969) offers a broader definition of content analysis as "any technique for making inferences by objectively and systematically identifying specified characteristics of messages". It is most commonly used in the social sciences to analyse recorded transcripts of interviews. Content analysis enables researchers to sift through large volumes of data in a systematic fashion (GAO, 1996). It is a useful technique for allowing us to discover and describe the focus of individual, group, institutional, or social attention (Weber, 1990). It also allows inferences to be made, which can then be corroborated using other methods of data collection. A large number of traditional content analysis methods were reviewed. All of these methods share a common goal: to find meaning among "messy data", as Chi (1997) puts it. As such, content analysis is well suited to the study of online discussions. In verbal analysis, "one tabulates, counts, and draws relations between the occurrences of different kinds of utterances" to reduce the subjectiveness of qualitative coding (Chi, 1997).


Traditional content analysis methods generally take a micro-analysis approach to the scrutiny of text-based discussion. They analyse individual units (often words or sentences) from a statistical perspective. This can be complex and time-consuming, and can lead to problems with semantics. Conventional content analysis has little basis in epistemology, seeking to provide a quantitative solution to a qualitative problem (the study of verbal communication). This study takes a different approach. It takes a well-known classification system (a derivation of Bloom's Taxonomy) and applies it to the online domain. The system has a strong epistemological basis, specifically rational epistemology, and is widely used and understood in education. The approach is fully described in the next chapter, together with a rationale for using it.

1.4 Research question


The main research area relates to the application of traditional taxonomies to the online environment. The specific research questions are stated below.

1. Can a taxonomy be used to analyse the learning that takes place in an online environment?
2. Can a taxonomic approach to the analysis of online writing be used to aid assessment?

Two forums were analysed, using two content analysis techniques, to identify the types of interactions and cognitions that took place in those environments. One forum was assessed and one was not. Once analysed, the forums were compared to identify any apparent differences in the nature of the learning that could be attributed to assessment.

1.5 Scope of study


This small study has a number of limitations. The study examined one form of online writing: online forums. While some of the recommendations are applicable to all types of [asynchronous] online writing, the study itself was limited to one specific form. Only two discussion forums were studied, comprising 33 students. Both forums were selected from one particular post-graduate programme: the Masters Degree in E-Learning from the University of Edinburgh. The associated courses are:

1. An Introduction to Digital Environments for Learning. Type: not assessed.
2. Understanding Learning in an Online Environment. Type: assessed.


It is recognised that the sample is small and the results are not generalisable. The participants are not typical, being post-graduate students on an e-learning course. The study focused exclusively on summative assessment; the use of online forums for formative purposes was not explored. The study sought to explore the applicability of traditional taxonomic approaches to the analysis of online forums. It did not seek to fully define a taxonomy for this purpose, but it does suggest enhancements to an existing taxonomy.

1.6 Significance of the study


Online writing is becoming increasingly common within education, particularly Higher Education. The study of the unique characteristics of online writing is embryonic. Its application, analysis and assessment have yet to establish a significant body of literature. Online forums have been used within Higher Education for many years and they continue to be a popular means of supporting learning. But even in this well-established environment, the analysis and assessment of learners' contributions is not well understood. A number of methods of analysing the contents of online discussion boards have been proposed, and some of these methods have gained traction and are considered best practice in this field. But these methods take a micro-analytical approach to content analysis. This paper seeks to explore a more holistic, macro-analytical method for measuring the academic quality of online interactions. Of particular interest is the potential for applying established methods of analysing academic discourse to the online environment. Traditional content analysis focuses on the manifest contents of the communication (its literal meaning from a verbal perspective); this analysis uses a more subtle approach to try to elicit the latent meaning of the communication (the learner's intended meaning). It is my contention that this approach is more instructive. While the specific recommendations in this paper relate to online discussion boards, many of them could be applied to any form of asynchronous online writing, which is a medium set to grow significantly in the coming years. The analysis and assessment of online writing is an evolving subject, without national or international standards; it is hoped this paper contributes to the development of these standards.


2: Research methodology and data collection methods

2.1 Methodology


Two forums were selected to address these questions. Both forums were part of the University of Edinburgh's Masters Degree in E-Learning, which has been offered by the university since 2006.³ The forums were part of the following courses within that programme:

- Introduction to Digital Environments for Learning (IDEL)
- Understanding Learning in an Online Environment (ULOE).

These courses were selected because they both required students to use an associated online discussion forum, and one was assessed (ULOE) and one was not (IDEL), permitting them to be compared with respect to the impact of assessment. In total, 33 students undertook these courses between September and December 2009; 21 participated in the IDEL course and 12 in the ULOE course. Each course lasted 12 weeks. In each course, some time towards the end was dedicated to assessment; this varied from one non-teaching week in IDEL (week 12) to two non-teaching weeks in ULOE (weeks 11 and 12). The IDEL course is an introductory course, normally, but not exclusively, undertaken by students new to the programme. The ULOE course is generally undertaken by more experienced students. ULOE was assessed using four elements, one of which related to learners' participation in the associated course forum. This contributed 10% to the overall course grade. A rubric was used to assess each student, based on that described by Rovai (2000). The other assessed elements were: learning needs analysis (20%), reflective report (20%), and an essay (50%).

2.2 Research design


Two forums were selected: one where student contributions were assessed, and one where they were not. These forums were analysed using two content analysis techniques: one (the practical inquiry model) was designed for online discussions, and the other (the taxonomic approach) was designed for the analysis of traditional academic discourse.

³ The programme was piloted in 2004.


Two research methods were used. These are summarised in Table 1.


Method 1: Content analysis of the online forums
Method 2: Student questionnaire to gauge the attitudes of participants

Table 1: Research methods

These methods are related to the research questions as described in Table 2.


Research question: Can a taxonomy be used to analyse the learning that takes place in an online environment? Method: content analysis.
Research question: Can a taxonomic approach to the analysis of online writing be used to aid assessment? Methods: content analysis; questionnaire.

Table 2: Research methods related to research questions

Each of the research methods is now described.

2.2.1 Method 1: Content analysis

The aim of the content analysis was to gain a clearer picture of the nature and quality of the online discussions. The analysis of the forums took both a quantitative and a qualitative approach. From a quantitative perspective, various metrics were computed, such as the total number of posts and the mean number of words per post. From a qualitative perspective, the academic quality was measured in terms of two attributes:

1. an analysis of the types of cognitive activities that occurred in the forums
2. an analysis of the critical thinking that took place.

The first analysis (categorising the types of cognitive activities taking place) used a modified version of Bloom's Taxonomy (1956), proposed by Anderson and Krathwohl (2001), as the coding frame. The second analysis (categorising the types of critical thinking) used the practical inquiry model proposed by Garrison, Anderson & Archer (2001). This a priori coding scheme was customised for the online environment by selective additions to capture the full range of communications made possible by the online environment. A full description of each approach is given later in this chapter. Each message, in both forums, was analysed using both of these methods.

A total of 1380 messages were analysed, distributed across the courses as follows:

- Introduction to Digital Environments for Learning: 723 messages
- Understanding Learning in the Online Environment: 657 messages.

The structure of each forum is illustrated in Figure 1. Discussion threads that were expressly intended for social messages or support messages were excluded from the analysis, as were all messages from the course tutors. Social messages are "a statement or part of a statement not related to formal content of subject matter" (Henri, 1992). It is recognised that social messages play an important role in community-forming, which affects participation, which, in turn, affects learning. But, for the purposes of this research, such messages were considered to be outside the scope of the research goals. For similar reasons, support messages (messages seeking or giving technical assistance) were excluded.⁴ The messages from tutors were removed from the analysis to ensure that the results illustrated learner contributions, and not the input from faculty. It is recognised that tutors play a significant role in facilitating learning in the online environment, but the focus of the study was the analysis of learner contributions, and the inclusion of tutor messages would contaminate the results. Once these discussions were removed, the structure of the forums was as illustrated in Figure 1.

Some analyses required a week-by-week comparison. For example, it is instructive to see how learner participation varied from week to week, or how the quality of contributions changed over time. The structure of the forums prevented a direct weekly comparison (see Figure 1) due to some weeks being combined (such as weeks 4 and 5 in IDEL). In such circumstances, weekly contributions were estimated by assuming that 50% of the messages were posted in each week. Where a forum was not associated with any particular week (for example, the ULOE forum for general discussions), the messages in the forum were evenly distributed across weeks 1 to 11. By adopting this methodology, a week-by-week comparison of the forums was made possible (see Appendix 3).
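For illustration only (this code is not part of the original study; the thread labels and function name are assumptions), the weekly estimation described above can be expressed as a short routine:

```python
# Illustrative sketch of the weekly estimation: messages in combined threads
# (e.g. weeks 4 and 5) are split equally across the weeks they span, and
# course-wide threads (e.g. ULOE's general forum) are spread over weeks 1-11.
def weekly_counts(thread_counts: dict[str, int], n_weeks: int = 11) -> dict[int, float]:
    """thread_counts maps a thread label such as 'week1', 'week4-5' or
    'general' to its message count; returns estimated messages per week."""
    weeks = {w: 0.0 for w in range(1, n_weeks + 1)}
    for label, count in thread_counts.items():
        if label.startswith("week"):
            bounds = [int(w) for w in label[len("week"):].split("-")]
            spanned = range(bounds[0], bounds[-1] + 1)
            for w in spanned:                 # a two-week thread gives 50% to each week
                weeks[w] += count / len(spanned)
        else:                                 # no specific week: distribute evenly
            for w in weeks:
                weeks[w] += count / n_weeks
    return weeks

# Example with invented counts: a single-week thread, a combined thread,
# and a general thread.
print(weekly_counts({"week1": 130, "week4-5": 150, "general": 80}))
```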

⁴ Four threads were excluded from the IDEL analysis; two threads were excluded from the ULOE analysis. These threads were explicitly designed for social or support purposes.


Figure 1: Forum structures

[Figure: the IDEL forum comprised threads for Week 1, Week 2, Week 3, Weeks 4-5, Weeks 6-7, Weeks 8-9 (Web), Weeks 8-9 (Hypermedia), Week 10 and Week 11; the ULOE forum comprised a General discussion thread, a References thread, and threads for Weeks 1-2, 3-4, 5-6, 7-8 and 9-10.]

2.2.1.1 Analysis of cognitive quality

Benjamin Bloom developed his Taxonomy of Educational Objectives in 1956 in an attempt to classify the goals of the educational system and discuss curricular issues with greater precision (page 1). The resulting classification system (known as Bloom's Taxonomy) has been widely used since then to categorise cognitive competencies, and has generally succeeded in its stated goal of providing a framework for discussing academic outcomes. It remains the most widely used classification system (of academic outcomes) in education.


Bloom proposed six types of cognition, as illustrated in Figure 2. A key aspect of the Taxonomy was its sense of hierarchy. Each level builds on the level below. For example, comprehension requires a knowledge base; application requires both comprehension and factual knowledge; evaluation is at the top of the hierarchy and is the most academically challenging, requiring all of the skills below it.
Figure 2: Bloom's Taxonomy

[Figure: a pyramid with, from top to bottom, Evaluation, Synthesis, Analysis, Application, Comprehension, Knowledge.]

Bloom's Taxonomy has been used for two main purposes: (1) to classify curricular objectives; and (2) to aid the construction of examination papers. Bloom's Taxonomy provides a classification system for teachers to use when they construct curricula. Prior to its introduction, teachers used the same words (such as "describe" and "explain") in different ways. His hierarchy placed these words in a specific order, giving each a defined meaning and relative position in a hierarchy of cognitive abilities. The Taxonomy helped teachers to write learning objectives in a more meaningful and consistent manner. It also improved the quality of examination papers. His definitions and hierarchy enabled teachers to write question papers in a more systematic and consistent way, and to construct more valid and reliable rubrics to assess these examinations.⁵

In spite of its broad adoption within all sectors of education, Bloom's Taxonomy has its critics. Some criticisms are straightforward. Krietzer et al (1994) complain that his hierarchy is wrong: that synthesis is, in fact, more complex than evaluation and should be above it in the hierarchy. Some criticisms are more nuanced. Ornell (1974) points out that the hierarchy is too simplistic; for example, knowledge can be more demanding than comprehension in some contexts.⁶ In spite of its critics, Bloom's Taxonomy has passed the test of longevity, and remains the most widely used classification system of learning outcomes in Western education.

⁵ Teachers used Bloom's Taxonomy to award more marks for higher-order questions, which assessed skills placed higher in his hierarchy. For example, an analysis question would be awarded more marks than a comprehension question.

⁶ Knowledge of quantum physics may be more intellectually demanding than the application of multiplication tables.


As Stemler (2001) writes: "Its greatest strength is that it has taken the very important topic of thinking, and placed a structure around it that is usable by practitioners." In 2001, Anderson and Krathwohl proposed a revision to Bloom's Taxonomy to incorporate "new knowledge and thought into the framework [...] now that we know more about how children develop and learn" (page xxii). Their Taxonomy for Learning, Teaching and Assessment (hereafter referred to as "the taxonomy") used Bloom's original Taxonomy as one dimension in a two-dimensional depiction of cognitive abilities (see Table 3).
Cognitive dimension (columns): 1. Remember; 2. Understand; 3. Apply; 4. Analyse; 5. Evaluate; 6. Create
Knowledge dimension (rows): A. Factual knowledge; B. Conceptual knowledge; C. Procedural knowledge; D. Meta knowledge

Table 3: Anderson and Krathwohl's Taxonomy for Learning, Teaching and Assessment
The cognitive dimension (the column headings in the table) is essentially Bloom's Taxonomy with minor modifications: a new category was added (create) and synthesis skills were absorbed into other categories. But the knowledge dimension (the rows of the table) was new. This was added to reflect "the developments in cognitive psychology that have taken place since the original framework's creation" (page 27). Table 4 summarises the components of this dimension (extracted from page 29).
Factual knowledge: The basic elements students must know to be acquainted with a discipline.
Conceptual knowledge: The inter-relationships among the basic elements within a larger structure that enable them to function together.
Procedural knowledge: How to do something, methods of enquiry, and criteria for using skills, techniques and methods.
Metacognitive knowledge: Knowledge of cognition and awareness of one's own cognition.

Table 4: Knowledge dimension of the taxonomy


The new dimension permits the type of cognition to be categorised more precisely, and addresses one of the central criticisms of the original taxonomy (that it is too simplistic). For example, the new framework differentiates between remembering facts (cell A1) and remembering concepts (cell B1). It removes some of the crude generalisations that are implicit in Bloom's scheme, for example that all knowledge is equal. Note that there is a general sense of rising cognition in the framework as you progress from cell A1 (remembering facts) to cell D6 (creating meta-knowledge). This taxonomy has the potential to provide a more granular method of coding learner contributions to the forums than Bloom's Taxonomy. As the authors noted, it incorporates a more contemporary view of cognitive development. It is also widely known and used within education and is, therefore, familiar to a wide cross-section of educationists.

2.2.1.2 Using the Revised Taxonomy for the Cognitive Analysis of Discussion Forums

Every message in both forums was analysed using the revised taxonomy as a coding frame. Each message was individually reviewed and placed in one of the boxes in Table 3, and assigned a code, ranging from A1 to D6, to represent its co-ordinates in the table. To aid coding, the framework was populated with additional information to assist with the placing of each message in an appropriate cell (see Table 7). This additional information was based on the advice provided by the original authors of the framework (Anderson and Krathwohl) but customised to reflect the dialogues that typically occur in the online environment. The table attempts to capture some of the unique affordances provided by an online forum, such as the ease with which a student can hyperlink to online resources. So, for example, providing a simple link to an online resource is considered remembering factual knowledge (A1); summarising an online discussion is an example of analysing conceptual knowledge (B4); and relating an online resource to a learning theory is an example of understanding meta-knowledge (D2). The table is essentially an attempt at updating the taxonomy to explicitly address online writing.

The unit of analysis was a message. Both the taxonomy and practical inquiry methods are macro-analysis tools, best applied at message level rather than to smaller units (such as sentences or paragraphs). A message is also a natural unit of analysis since it is clearly delineated and consciously packaged by the learner.

The selection of a complete message as the unit of analysis also simplifies the process of content analysis, and obviates one of the main criticisms of traditional approaches (see page 28). The code assigned to each message represented the highest level of cognitive engagement in the message. So, if a single message exhibited more than one level of cognition, the higher level was coded. Two messages are provided in Figure 3 and Figure 4 to illustrate this. The message in Figure 4 was coded B2 (understanding conceptual knowledge) since it asks questions about a concept (Dreyfus's position on disembodied learning) to improve understanding. This example is unambiguous.

Figure 3: Sample message from IDEL course

The message in Figure 3 was coded D2 (understanding meta-knowledge) since the author concludes the message by appraising her own learning style. Note that most of the message related to understanding conceptual knowledge (B2), since it provided a metaphor for a concept, but the higher cognitive competence was coded.


Figure 4: Sample message from ULOE course

Messages that contain more than one cognitive competence (according to the revised taxonomy) could have been multi-coded; that is, several codes could have been assigned to a single message to represent the various competences contained within it. However, both the original taxonomy and the revised taxonomy take a hierarchical approach to classification, and this approach (to identify the highest level of cognition only) was adopted in this study for consistency with the philosophy of the original authors. This approach also simplifies classification, since it assigns a single code to a single message, which was another goal of the methodology. Every message in both forums was accordingly assigned a code ranging from A1 to D6 as an indication of its cognitive level. To improve reliability, the messages were coded twice (code-recode technique). However, inter-rater reliability was not measured since the same person (the author) coded on both occasions. The resulting coding tables are provided in Appendix 1 and Appendix 2. These tables show the analysis of messages on each forum by week. An excerpt from the coding tables is provided in Table 5. The excerpt is provided for exemplification only.
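To illustrate the highest-level rule, the following sketch is offered (it is not from the paper; the ordering of codes is an assumption based on the general "rising cognition" from A1 to D6 described above):

```python
# Illustrative sketch: reduce several candidate taxonomy codes for one message
# to a single code. The ranking assumes the cognitive level (1-6) dominates,
# with the knowledge type (A-D) breaking ties; this ordering is an assumption.
ROWS = "ABCD"    # knowledge dimension: Factual .. Metacognitive
COLS = "123456"  # cognitive dimension: Remember .. Create

def highest_code(candidates: list[str]) -> str:
    def rank(code: str) -> tuple[int, int]:
        return (COLS.index(code[1]), ROWS.index(code[0]))
    return max(candidates, key=rank)

# The Figure 3 example: mostly B2 (understanding conceptual knowledge) but
# ending with a self-appraisal of learning style (D2); the higher code wins.
print(highest_code(["B2", "D2"]))  # -> D2
```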


Week   A1  A2  A3  A4  A5  A6  B1  B2  B3  B4  B5  B6  C1
IDEL1   0   0   0   0   0   0   0   5   1   1   1   0   3
IDEL2   6   1   1   0   0   0   0   4   5   4   3   0   0
IDEL3   5   2   9   0   0   0   0   0   0   3   1   0   0

Table 5: Excerpt from coding tables

This table shows a partial analysis of the IDEL forum. The rows indicate the week of the course; for example, IDEL2 is week two of the [IDEL] course. The columns indicate the coding of student contributions to the forum during that week; for example, six contributions were coded A1 (stated basic facts) during week two, and three contributions were coded B5 (evaluating conceptual knowledge). These results are summarised and presented in context in subsequent sections of this paper.

2.2.1.3 Analysis of critical thinking

The practical inquiry model proposed by Garrison, Anderson and Archer (2000) was used to analyse the critical thinking on the forums. This model is well grounded in theory, building on Dewey's (1933) approach to learning, and is also widely used within (Higher) education. The model has four phases:

1. triggering event
2. exploration
3. integration
4. resolution.

Table 6 describes these phases in more detail. This model was designed to identify critical thinking, which Garrison et al defined as "the extent to which learners are able to construct and confirm meaning through sustained reflection and discourse in a critical community of inquiry" (Garrison, Anderson, & Archer, 2000).


Phase 1. Triggering event: An issue or problem is identified; this may be a learning challenge initiated by the teacher or an observation posted by a learner.
Phase 2. Exploration: The social exploration of ideas. In this phase, learners seek to comprehend the problem or issue, moving between discourse with other learners and critical reflection.
Phase 3. Integration: Meaning is constructed from the ideas and comments produced in the previous phase. In this phase, ideas are assessed and connections are made between ideas.
Phase 4. Resolution: In this phase, conclusions are reached and some consensus is created between learners.

Table 6: Practical enquiry model of critical analysis

This method was used to analyse the contents of each forum because it is an established means of identifying and categorising critical thinking. To aid the model's use as a classification tool for online forums, it was augmented to provide additional guidance on its application in an online environment (see Figure 5). Note that the model is circular, since the output from the resolve stage can be used as a trigger event. The additional information provided in Figure 5 illustrates the contents of messages in each category. For example, an obvious trigger (T) message is one that poses a question; a message that sought to clarify and expand on an initial question would be considered an explore (E) message; a message that identified underlying concepts would be considered an integrate (I) message; and a message that sought to summarise the discussion would be coded as a resolve (R) message.

The algorithm for encoding each message using this scheme depended on three factors: (1) the author's (presumed) intention; (2) the contents of the message; and (3) the actual effect of the message. In most cases, these factors concurred. A message intended to stimulate debate (a trigger message) was written accordingly (for example, as a question), and had the intended effect (to generate a response). Occasionally, a message did not have its intended effect. For example, a message intended to generate a discussion failed to produce any replies. In these circumstances, all three factors were considered in arriving at a code. For example, it might be coded as a (failed) trigger message because of its contents (perhaps a simple question), in spite of its outcome; or it might be encoded as an explore message because it not only asked a question (one that failed to get a response) but also provided some commentary. The actual choice of code was a matter of judgement.
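One way to operationalise this three-factor judgement is sketched below. The representation and the majority-vote tie-break are assumptions for illustration, not the procedure used in the study:

```python
# Illustrative sketch: record the three factors considered for each message
# (presumed intention, contents, actual effect) and derive a practical
# inquiry code (T, E, I or R). The majority-vote rule is an assumption.
from collections import Counter
from dataclasses import dataclass

@dataclass
class MessageJudgement:
    intention: str  # code suggested by the author's presumed intent
    contents: str   # code suggested by the message text itself
    effect: str     # code suggested by what the message actually produced

def assign_code(j: MessageJudgement) -> str:
    """Return the code on which at least two factors agree; where all three
    differ, fall back on the contents (the final choice being, as the paper
    notes, a matter of judgement)."""
    code, count = Counter([j.intention, j.contents, j.effect]).most_common(1)[0]
    return code if count >= 2 else j.contents

# Example: a message written as a trigger question that drew no replies.
print(assign_code(MessageJudgement("T", "T", "E")))  # -> T
```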


A. Factual knowledge
  A1 Remember: State relevant facts. Provide references/hyperlinks. Ask relevant questions. Describe relevant personal experiences. Factual replies to messages. Suggest relevant resources. Provide appropriate links.
  A2 Understand: Explain facts. Check facts. Clarify questions. Correct data/factual information. Classify facts. Summarise facts/data. Exemplify facts. Suggest related resources.
  A3 Apply: Perform calculations. Relate factual information to self or others. Apply factual information to self or others. State opinion in relation to facts provided. Suggest use of resources to self or others.
  A4 Analyse: Determine point of view about facts/factual information. Check facts/information for accuracy. Determine bias in data/factual information. Select data. Organise data. Discriminate between facts. Compare facts. Summarise data. Provide factual summary.
  A5 Evaluate: Apply criteria to factual information. Assess factual information using criteria. Evaluate resources.
  A6 Create: Generate factual information. Produce hypotheses about factual information. Create factual scenarios. Create criteria for evaluating factual information. Evaluate personal experience factually. Create original data. Create digital resource with factual use.

B. Conceptual knowledge
  B1 Remember: State theories/concepts. Reference concepts/ideas. Quote ideas/concepts. State systems. State relationships.
  B2 Understand: Explain concept/system in own words. Seek clarification about idea/concept. Give examples of idea. Describe systems. Make links between ideas. Ask questions about ideas/positions. Agree or disagree with reasons. Provide metaphors for concept. Seek clarification about concept/system.
  B3 Apply: Follow a system. Apply a method. Identify relationships. Relate concept to self. Describe personal experience of concept. Apply concept to self. Suggest uses of concept. Suggest resources linked to concept.
  B4 Analyse: Compare ideas/concepts/systems. Break down ideas/systems. Agree or disagree with concepts/methods with reasons. Summarise a discussion/concept/system.
  B5 Evaluate: Judge a system or relationship. Assess a concept using criteria. Apply criteria to an idea or system. Challenge an idea/position using criteria or references.
  B6 Create: Make a relationship. Devise a system. Synthesise a discussion. Provide an insight. Propose changes to a system or relationship. Create criteria to evaluate an idea/system. Create an original idea/method.

C. Procedural knowledge
  C1 Remember: State a procedure or method. Refer to a relationship or method. Describe how to do something.
  C2 Understand: Describe a procedure or method. Describe a relationship. Discuss a procedure or method. Clarify a procedure. Ask questions about method/relationship. Describe personal experience of system/method. Relate conceptual knowledge to a procedure/method.
  C3 Apply: Follow a procedure. Relate procedure/method to self. Describe personal use of system/procedure.
  C4 Analyse: Break down a procedure or method. Identify components within a procedure. Compare steps/stages. Compare methods of doing something. Summarise a procedure.
  C5 Evaluate: Judge a procedure/system using criteria or references. Assess the criteria for a procedure.
  C6 Create: Devise a procedure or method. Propose justified changes to a system/procedure. Write criteria for judging a procedure or method. Propose a new system.

D. Metacognitive knowledge
  D1 Remember: Describe learning theories. State facts about self in relation to course content. State facts about nature of knowledge. State facts about learning. Refer to learning strategies.
  D2 Understand: Describe learning strategies. Explain the nature of knowledge. Describe personal limitations or abilities related to learning or course content. Relate resource to learning theory.
  D3 Apply: Use knowledge of self to further learning. Apply learning strategies. Reflect on professional practice related to course. Relate technology to own learning.
  D4 Analyse: Analyse self using criteria. Identify personal strengths and weaknesses. Apply learning strategies. Identify bias in self or learning theory.
  D5 Evaluate: Compare learning strategies. Evaluate self using criteria. Judge own learning. Assess learning strategies. Assess resource for learning.
  D6 Create: Devise a learning strategy. Modify an existing learning strategy. Create a new resource for learning.

Table 7: Coding table for content analysis of cognition, with exemplification of online writing


Every message in both forums was encoded T (trigger), E (explore), I (integrate) or R (resolve) to denote the type of critical thinking it contained. Appendix 1 and Appendix 2 provide completed coding tables for each week in each forum.
Figure 5: Critical inquiry model used to encode forums, with exemplification of online writing

[Figure: a cycle of four phases, each exemplified as follows.]
TRIGGER: Describe personal experience. Summarise a reading/position. Comment on reading. Ask a question. Identify issue. Suggest reading. Point out resource. Propose a learning challenge. Recognise a problem. Take discussion in a new direction.
EXPLORE: Reply to a question. Develop an existing issue. Clarify nature of issue. Exchange information. Suggest related resource/reading. Identify what is relevant. Provide an unsupported opinion. Agree/disagree without substantiation. Brainstorm.
INTEGRATE: Identify relevancy of issue/resource. Propose how idea links to other ideas. Suggest use of resource. Suggest how to apply idea. Suggest how idea/resource "fits in". Agree/disagree with substantiation. Suggest criteria for testing. Link to previous learning. Assess idea. Relate issue/position to learning.
RESOLVE: Summarise discussion. Synthesise key points. Build consensus. Create a solution. Defend a solution. Implement idea. Use resource. Add to knowledge. Test ideas. Make conclusions.

2.2.2 Method 2: Questionnaire

Each student in both courses was asked to complete an online questionnaire. The overall aim of the questionnaire was to gauge student attitudes towards the use of discussion boards for learning and assessment. More specifically, the questionnaire sought to:


1. provide demographic information
2. indicate previous experience (of learning and technology)
3. gauge attitudes towards learning in general, and the use of discussion boards for learning in particular
4. gauge attitudes towards assessment in general, and the use of discussion boards for assessment in particular.

The resulting information was used to interpret and add context to the findings from the content analysis. The questionnaire was available online (only). It consisted of 22 questions, taking approximately 15 minutes to complete. It was made available to students on both courses from mid-November to the end of December 2009 (six weeks). Reminders were sent periodically (three in total) to students who had not completed it. Twenty-three students completed the questionnaire, out of a population of 33 students, representing a 70% completion rate. The response rate for each course is shown in Table 8.
        Completed  Possible  % response
IDEL    13         21        62%
ULOE    10         12        83%
Total   23         33        70%

Table 8: Questionnaire completion rates

2.2.3 Strengths and weaknesses of the research design

The validity and reliability of the results were affected by two factors: differences in the nature of the two student groups (populations), and ambiguities in the coding of student contributions.

2.2.3.1 Differences in populations

There were significant differences between the two groups of students. IDEL is typically undertaken as an introductory course, whereas ULOE is typically chosen by more experienced students. That was the case in this study. Eleven of the 13 students undertaking the IDEL course were first-timers (students new to the Masters programme); only three of the 10 students on the ULOE course were new to the programme.


The relative inexperience of the IDEL students would be expected to have both positive and negative impacts on the results. On the positive side, it may stimulate participation (in terms of the quantity of their contributions). On the negative side, it may reduce quality (in terms of its academic standard). Another difference was the use of online tools within each course. IDEL used a variety of synchronous and asynchronous discussion tools (including a VOIP application and a virtual world), of which the discussion forum was only one, albeit the main, means of dialogue. ULOE used only the discussion forum. Other variables would also impact the results. IDEL is a mandatory course, whereas ULOE is optional. IDEL had two (female) facilitators, whereas ULOE had one (male) facilitator. And, of course, the course contents are different, ULOE being more theoretical in nature than IDEL.

It would be more valid to compare the same course, rather than two different courses. The ideal scenario would be to study the same course delivered at the same time, to similarly experienced students, with the only significant variable being assessment (one being assessed and one not). Fully controlled conditions such as this were not possible. As a result, it is not possible to make unequivocal claims about the impact of assessment on online discussions. In spite of these differences, it is contended that the findings are strongly suggested by the results. The two groups of students were homogeneous in many ways. All were undertaking the same post-graduate programme. Their academic backgrounds were very similar. The standards expected from both groups were broadly equivalent, as were the standards of tuition.⁷ And their attitudes towards learning, as evidenced by the survey, were analogous.

⁷ The university employs quality assurance processes to ensure acceptable standards of teaching across the programme.


The population differences may be advantageous for the content analysis since they provide two distinct (in some respects) groups on which the methodology can be tested.

2.2.3.2 Reliability of coding

Neither coding scheme was defined in such a way as to permit the allocation of codes with complete accuracy. This particularly applied to the encoding of messages using the taxonomy. The allocation of messages to a specific code (A1 through D6) occasionally involved the selection of one code from two (and sometimes more) potential choices. For example, a particular message might unequivocally relate to conceptual knowledge (the B scale) but its placement in the cognitive dimension was often uncertain, resulting in more than one code being potentially applicable (such as B4 or B5). While the encoding of messages using the critical enquiry model was clearer (fewer classes and clearer delineation), even here there was occasional ambiguity.

Nonetheless, this uncertainty does not invalidate the results. An element of subjectivity is inherent in qualitative analysis. The assigned codes were double-checked to improve reliability; the code-recode results were consistent.⁸ Those messages that were ambiguous are likely to broadly balance themselves out over the analysis of some 1400 messages. The response rate to the questionnaire (70%) was high, providing confidence in the results. The anonymity offered to respondents is likely to improve the reliability of the results. However, the missing respondents (30%) may have significantly affected the results of the survey. The approach adopted in this study obviated some of the traditional problems with content analysis, the "fatal flaws" of faulty definitions of categories and non-mutually exclusive and exhaustive categories (Stemler, 2001). The use of a priori coding frames, based on long-established academic frameworks, significantly reduced the likelihood of these problems.

⁸ Fifty-three messages were assigned a different code during recoding, representing a potential error rate of 7.6%.
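As an aside, the code-recode check is simple to express. This sketch is not from the paper; the paired-list representation of the two coding passes is an assumption:

```python
# Illustrative sketch: given paired codes from the first and second coding
# passes, compute the proportion of messages whose code changed on recoding.
def recode_disagreement(pairs: list[tuple[str, str]]) -> float:
    changed = sum(1 for first, second in pairs if first != second)
    return changed / len(pairs)

# Tiny invented example: one of four messages was recoded differently.
print(recode_disagreement([("B2", "B2"), ("A1", "A1"), ("B4", "B5"), ("D2", "D2")]))  # 0.25
```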


2.3 Ethics
This research was considered to have few ethical implications. The main ethical considerations related to permission and privacy. The British Psychological Society's Code of Practice (2009) was followed during this study. Permission was sought from the programme leader and each of the course tutors. Permission was sought from every student (in both courses) to include their messages in the analysis of the forums.⁹ Students were assured that no-one would be identified or identifiable in the resulting documentation. Respondents to the survey were given the option of completing it anonymously. Seventeen of the 23 respondents (74%) elected to identify themselves, confirming the low ethical implications of this research.

⁹ This was done by course tutors informing students about this research and inviting any student who did not wish their contributions to be included in the analysis to contact me. None did.


3: Findings

3.1 Findings from the student survey


The student survey was used for qualitative and quantitative purposes. Qualitatively, it aimed to gauge students' attitudes towards learning and assessment, and their experiences of using an online forum. Quantitatively, it sought to measure such variables as demographics and previous experience. The findings were used to contextualise the results from the analyses of participation and contents. The questionnaire made it optional to disclose each respondent's identity (by choosing whether to supply a student ID). None of the ULOE students chose to protect their identity; half of the IDEL students did.

The two cohorts have similar demographics in terms of their ages, native language, educational backgrounds, employment status and IT skills. There were more women in the IDEL course (60%) than in the ULOE course (30%). Not surprisingly, given that IDEL is an introductory course, their previous experience on the programme varied. Most of the students in the IDEL course (85%) were new to the programme (this was their first course), compared to 30% of the ULOE students.

Their attitudes to learning were similar. Both groups accepted responsibility for learning, seeing the tutor's role as facilitation rather than direction. They also liked online forums in equal measure, viewing them as conducive to learning, facilitating deep learning, and improving engagement with their courses. There was a particularly positive response with regard to engagement, with 80% of both groups reporting an improvement through their use of the forums. Both groups reported that online forums could be used for all types of learning (using Bloom's classification). However, IDEL students felt that the forums were more suited to shallower learning (the lower levels of Bloom's Taxonomy) than ULOE students, who reported their suitability for all levels of learning.


IDEL students claimed to have less time to devote to their forum than ULOE students: proportionately, twice as many IDEL students as ULOE students reported time constraints on their contributions to the forums. They also felt that certain students sought to dominate discussions, in marked contrast to ULOE students (50% of IDEL students reported this, compared with 22% of ULOE students). Students' attitudes to assessment reflected the mode of assessment employed in their courses. IDEL students were generally opposed to the assessment of their online contributions (75% were against it) and ULOE students were generally in favour of it (78% approved of it). In spite of their antipathy towards assessment (of their online contributions), 50% of IDEL students expressed the view that assessment would improve their motivation to contribute to the forum.

3.2 Findings from the participation analysis


Three metrics were identified and measured in each forum as a proxy for student participation. These were:

1. mean number of views per student
2. mean number of posts per student
3. mean number of words per post.

These findings are summarised in Table 9 and Table 10. Student participation was similar on two metrics: the mean views per student and the mean words per post. But the third metric (mean posts per student) was significantly different, with ULOE students posting, on average, approximately 50% more messages than IDEL students.
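For illustration (this helper is not from the study; the field names are assumed), the three metrics reduce to simple ratios. Note that, in Tables 9 and 10, mean words per post appears to be computed over all posts (tutor posts included), while the per-student means divide by the number of students on the course:

```python
# Minimal sketch of the participation metrics in Tables 9 and 10; the split
# between student posts and all posts is an assumption inferred from the data.
def participation_metrics(student_posts: int, all_posts: int, total_views: int,
                          total_words: int, n_students: int) -> dict[str, float]:
    return {
        "mean_posts_per_student": student_posts / n_students,
        "mean_views_per_student": total_views / n_students,
        "mean_words_per_post": total_words / all_posts,
    }

# The IDEL1 row of Table 9: 130 student posts (150 including the tutor's),
# 1264 views and 39593 words, over the 21 students on the course.
print(participation_metrics(130, 150, 1264, 39593, 21))
# -> roughly {'mean_posts_per_student': 6.2, 'mean_views_per_student': 60.2,
#             'mean_words_per_post': 264.0}
```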


Topic     Tutor posts  Student posts  Total views  Total words  Mean posts/student  Mean views/student  Mean words/post
IDEL1     20           130            1264         39593        6                   60                  264
IDEL2     9            29             405          7317         1                   19                  193
IDEL3     7            28             294          7332         1                   14                  209
IDEL4-5   32           150            1547         25052        7                   74                  138
IDEL6-7   17           110            985          31463        5                   47                  248
IDEL8-9W  13           70             620          17855        3                   30                  215
IDEL8-9H  5            13             190          4360         1                   9                   242
IDEL10    7            43             414          9819         2                   20                  196
IDEL11    6            34             405          9251         2                   19                  231
SUMMARY   116          607            6124         152042       29                  292                 210

Table 9: Participation metrics for IDEL

This difference does not appear to be caused by language or technical skills. Neither group of students reported their technical skills as a barrier to them contributing to the forums. Two students reported that their language skills inhibited them but both were on the ULOE course.
Topic      Tutor posts  Student posts  Total views  Total words  Mean posts/student  Mean views/student  Mean words/post
ULOE1-2    19           107            601          23501        9                   84                  187
ULOE3-4    27           90             704          26270        8                   59                  225
ULOE5-6    17           47             325          15051        4                   27                  235
ULOE7-8    31           89             512          23965        7                   43                  200
ULOE9-10   8            48             263          10923        4                   22                  195
General    25           80             477          17393        7                   40                  166
Reference  17           52             326          14846        4                   27                  215
SUMMARY    144          513            3208         131949       43                  267                 201

Table 10: Participation metrics for ULOE


3.2.1 Participation index

These data were used to compute a participation index (PI), which is the product of the mean views and the mean posts per student. The PI was created to provide a single, simple metric that can serve as a proxy for the overall participation in each course each week. Although the resulting PI value has no intrinsic meaning, it has relative significance since it is computed by multiplying two numbers with meaning (mean views and mean posts). The resulting metric can, therefore, be used to compare participation. The PI values for both forums are provided in Table 11.¹⁰
Week  IDEL Views  IDEL Posts  IDEL PI  ULOE Views  ULOE Posts  ULOE PI
1     60.2        6.2         373      47.9        5.5         261
2     19.3        1.4         27       47.9        5.5         261
3     14.0        1.3         19       35.4        4.8         168
4     36.8        3.6         132      35.4        4.8         168
5     36.8        3.6         132      19.6        3.0         58
6     23.5        2.6         61       19.6        3.0         58
7     23.5        2.6         61       27.4        4.7         129
8     19.3        2.0         38       27.4        4.7         129
9     19.3        2.0         38       17.0        3.0         51
10    19.7        2.0         40       17.0        3.0         51
11    19.3        1.6         31       6.1         1.0         6

Table 11: Participation indices for each forum
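As a sketch of the calculation (the function name is assumed; the definition follows the text above):

```python
# The participation index as defined above: the product of the mean views
# and the mean posts per student for a given week.
def participation_index(mean_views: float, mean_posts: float) -> float:
    return mean_views * mean_posts

# Week 1 of IDEL from Table 11: 60.2 mean views x 6.2 mean posts.
print(round(participation_index(60.2, 6.2)))  # -> 373
```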

The creation of a single metric to represent overall participation facilitates the visualisation of participation. The PIs for the IDEL and ULOE forums are illustrated in Figure 6.

¹⁰ See page 15 for an explanation of how the forums were broken down into weekly contributions.


Figure 6: Participation indices compared

[Figure: a line chart of the weekly PI values for IDEL and ULOE across weeks 1 to 11, plotted from Table 11.]

The resulting graphs appear to be accurate depictions of the actual activity in each forum over the semester. Both graphs, as you would expect, start high and finish low. ULOE exhibits the classic pattern for online participation (high initial activity, followed by declining activity, with low activity at the end). The IDEL graph is a little less traditional, with activity increasing and decreasing sharply between some weeks but, even here, the overall pattern is high to low.¹¹ The graphs show that participation was higher in ULOE in seven of the eleven weeks. IDEL was more active in three of the weeks (week 6 being equal for both courses). Participation was also more evenly distributed (less spiky) across the semester in the ULOE course.

¹¹ The weekly fluctuations in the IDEL forum are at least partly explained by the nature of some of the weekly activities. For example, some weeks focussed on Skype communications, when forum activity would be expected to decline.


3.3 Findings from the Content Analysis


The results of the content analysis by taxonomy and critical analysis are shown in Appendix 1 and Appendix 2. Extracting the taxonomy data, and comparing both courses in this respect, produced Table 12. Figure 7 illustrates these data sets graphically.
      Factual  Conceptual  Procedural  Meta  None
IDEL  21%      22%         23%         2%    32%
ULOE  11%      59%         18%         5%    7%

Table 12: Summary of forums using taxonomy analysis

The IDEL forum was significantly more factual and slightly more procedural than the ULOE forum. One in five posts on the IDEL forum stated basic facts or unsubstantiated factual opinions. This was double the rate of the ULOE forum.
Figure 7: Comparison of forums using taxonomy

[Figure: a bar chart of the Table 12 percentages (factual, conceptual, procedural, meta) for IDEL and ULOE.]

Neither forum exhibited much meta-knowledge. The striking difference is the amount of conceptual knowledge in each forum (see Figure 7).


Slightly more than one in five messages (22%) in the IDEL forum related to conceptual knowledge; the most common sub-class was understanding conceptual knowledge, with 52 occurrences. This compares with almost 60% of messages in ULOE relating to conceptual knowledge; the most common sub-class was analysing conceptual knowledge, with 121 occurrences. Another striking difference is the proportion of posts that were not academic in nature. Almost one in three posts in the IDEL forum was a social message or a request for support. This compares to one in 16 in the ULOE forum. Extracting the critical inquiry data from the tables produces the following summary (see Table 13).¹²
           IDEL  ULOE
Trigger    17%   24%
Explore    27%   27%
Integrate  20%   23%
Resolve    4%    9%

Table 13: Analysis of forums based on critical thinking

These data are illustrated in Figure 8.


Figure 8: Comparison of critical thinking in the forums

[Figure: a bar chart of the Table 13 percentages (T, E, I, R) for IDEL and ULOE.]

¹² The percentages do not add to 100 since social and technical messages were excluded.


The analysis of critical thinking produced fewer differences than the taxonomy analysis. In fact, the forums were broadly equivalent in terms of this analysis. The main differences relate to the number of trigger messages and the number of messages that sought to resolve discussions. ULOE included (proportionately) more trigger-type messages than IDEL. Similarly, and more significantly, ULOE included twice as many resolve-type messages as IDEL. However, both percentages were low, indicating that few [student] messages, in either forum, sought to summarise or synthesise discussions.

3.4 Implications for assessment


Assessment appears to have had a positive effect on participation, academic quality, and attitudes towards learning. From a participatory perspective, ULOE students made significantly more posts than IDEL students (see page 26), which is consistent with previous findings on the motivational effect of assessment. Less predictably, the quality of contributions made by ULOE students was significantly higher, as measured by the taxonomy (see page 28). There were considerably more conceptual discussions, and fewer factual discussions, in the ULOE forum than in the IDEL forum. Assessment also appears to have had a positive effect on the attitudes of students towards online discussions, with ULOE students being more satisfied than IDEL students with their experiences in their respective forums.

The use of the taxonomy to analyse the forums provided a useful insight into the nature of the discussions that took place within them. The resulting analysis (see Table 12) provides a breakdown of the types of discussion from a cognitive perspective. This information could inform assessment. It would help with assessment-setting (specifying what faculty wants from students) and it would aid marking by differentiating between types of student contribution. It would also standardise language: adopting an e-taxonomy would provide a consistent vocabulary for faculty to employ when planning, setting and appraising student activities.


4: Conclusions and Recommendations

4.1 Conclusions from the study


The conclusions from the study are presented as answers to the research questions, which are stated below.
1. Can a taxonomy be used to analyse the learning that takes place in an online environment?
2. Can a taxonomic approach to the analysis of online writing be used to aid assessment?
The conclusions are presented in the context of these questions.

4.1.1 Can a taxonomy be used to analyse the learning that takes place in an online environment?

The content analysis methods employed in this study proved to be effective means of analysing the academic interactions on the forums. The analysis by taxonomy worked well, providing an insight into the nature (and types) of academic discourse in each forum. The critical analysis method also worked well, albeit producing less illuminating results. A significant conclusion from this study is that the proposed analysis by taxonomy is a workable and effective means of analysing online discussions.

The content analysis by taxonomy proved to be an effective tool for analysing the cognitive quality of online discussions. The results of this analysis are shown in Table 12 and illustrated in Figure 7 (see page 28). The technique also appears to have predictive value: the research design (see page 6) suggested that ULOE was more conceptual than IDEL, and this was confirmed by the results of the analysis. However, the traditional Anderson and Krathwohl taxonomy requires to be revised to incorporate the new affordances of digital writing. The table on page 17 begins the process of customising the taxonomy for this purpose, but further work is needed to produce a truly updated taxonomy for the online environment. Like Bloom's original taxonomy, and subsequent revisions of it, an updated taxonomy would have important implications for assessment, providing a framework for the scrutiny of online writing and the construction of rubrics.


The critical analysis method also worked well, although it produced less differentiated results. It is the conclusion of this study that the practical enquiry model is a useful way of categorising contributions (as trigger, explore, integrate or resolve-type contributions) but has less direct application to assessment than the taxonomy. However, the associated terminology has the potential to improve pedagogy and aid rubric construction. The practical enquiry model (see Table 13 and Figure 8 on page 29) also confirmed the quality of the discussions. Both forums included a significant number of messages that spanned the trigger, explore and integrate phases of this model; the resolve phase was less evident. The practical enquiry model proved to be a simple and reliable method of categorising contributions, and one that can be easily applied to various forms of online writing.

Both methods of content analysis (the taxonomy and the practical enquiry model) provide a higher-level view of online discourse than traditional content analysis methods. Both were easily applied and produced rapid results compared with traditional content analysis methods.

The creation of a participation index (PI) (see page 25) proved to be a useful way of visualising participation (see Figure 9). This metric, representing the product of mean views and mean posts per student, has no intrinsic meaning but provides a single, simple measure of participation that can be used to summarise and compare participation between forums. It also facilitates graphical representation, providing a simple visualisation of participation over time. It could be applied to various forms of online writing where the component metrics (viewing and posting statistics) are known.

4.1.2 Can a taxonomic approach to the analysis of online writing be used to aid assessment?

The effect of assessment was considered from two perspectives: (1) learner participation; and (2) quality of discussion.


4.1.2.1 Apparent effect of assessment on participation

There were significant differences between the two forums with respect to student participation. ULOE students, on average, posted 50% more messages than IDEL students (see Table 9 and Table 10 on page 25). Furthermore, their participation was more uniformly spread across the semester (see Figure 6 on page 27). This could be attributed to a number of variables, but an important difference between the forums was their assessment.

Student responses to the questionnaire showed significant differences in their attitudes towards assessment.¹³ One in four IDEL students supported the assessment of their contributions; this compared to three in four ULOE students. Although only a minority of IDEL students wanted assessment, half of them believed that it would improve their motivation and increase the time they spent on the forum. Two thirds of ULOE students believed that assessment improved their motivation and increased the time spent on the forum.

This confirms earlier research linking assessment to participation, which indicated that assessment does influence learners' use of online forums. In a study of 262 university students, attending 18 different online graduate courses in the United States, Rovai (2003) concluded that grading learner contributions to discussion boards increased their participation (as measured by the number of messages posted). He also stated that some form of assessment was needed in order to ensure that students participate. The number of marks given to the activity was not critical (low marks engendered engagement as much as high marks) but some form of assessment incentive was needed.

4.1.2.2 Effect of assessment on the quality of discussion

Assessment appeared to have two effects on the quality of discussion: it improved quality, as measured by the level of academic discourse taking place, and it altered the nature of messages.

¹³ The survey was carried out after the courses were complete, so it is presumed that students' experiences during the respective courses influenced their responses.


The findings on page 28 show that the assessed forum had a significantly higher proportion of messages relating to course concepts than the non-assessed forum. Also, the assessed forum had significantly fewer non-academic (social and support) messages. However, this cannot be conclusively attributed to assessment. Other variables, such as the nature of the course contents or the prior experience of the student groups, may have played a significant role (see 2.2.2 Strengths and weaknesses of the research design on page 19). But the apparent positive effects of assessment seen in this small study are consistent with previous larger studies.

4.2 Recommendations
Based on these conclusions, the following recommendations are made.
1. Traditional taxonomic approaches to categorising academic discourse can be used to analyse online writing.
2. An e-taxonomy should be created for the purpose of analysing online writing and improving assessment practice.
Each recommendation is now explained.

4.2.1 Recommendation 1: Traditional taxonomies can be used to analyse online writing

This study used a traditional taxonomic approach to the analysis of online forums, and found that this produced worthwhile results. The reasons for using educational taxonomies appear to be as valid in the online domain as in the traditional classroom. The application of the taxonomy provided a framework for describing learner contributions, and worked well as a means of analysing and classifying online writing, which has implications for the assessment of this activity (see below). The taxonomy operates at a macro (whole message) level and, in consequence, avoids some of the drawbacks of traditional content analysis methods.


Unsurprisingly, it was discovered that traditional taxonomies require to be customised for the online environment, to take full account of the unique affordances of digital writing. This study showed that such customisations are possible and, once made, greatly assist with the process of analysing contributions.

4.2.2 Recommendation 2: An e-taxonomy should be created for the purpose of analysing online writing and improving assessment practice

Recommendation 1 stated that traditional taxonomies require customisation to embrace the unique affordances of online writing. The updated taxonomy could be described as an e-taxonomy. An e-taxonomy has the potential to improve online practice in much the same way that Bloom's original taxonomy had a positive impact on classroom practice. The original taxonomy provided teachers with a consistent way of defining curricula, and constructing (and marking) examination papers. An e-taxonomy could confer similar benefits on the online learning environment. It would improve assessment practice by providing a common framework (and vocabulary) for appraising learner contributions to online discussions, and has the potential to improve validity and fidelity.

4.3 Applicability of these recommendations to other forms of online writing


While this paper has concentrated on the analysis and assessment of online forums, the recommendations could be applied to other forms of online writing. They are particularly relevant to asynchronous writing. An e-taxonomy could be used as the basis for analysing online writing in various forms, such as that evident in social networks (such as Facebook or Google+), blogs, and wikis. For example, a learning circle in Google+ could be assessed using a customised version of the e-taxonomy. The taxonomy could be revised to reflect the unique affordances of Google+, and this could be used as the basis of a rubric to assess learners' contributions to the circle.


The recommendations also apply to synchronous online writing, such as the interaction that may take place in a virtual world. A taxonomy would aid the interpretation of learner activity; assessment would motivate learners in this environment as in any other; and a rubric based on the taxonomy would provide a transparent, equitable and valid means of assessment. However, the taxonomy table would require greater customisation for the synchronous environment than it would for asynchronous interaction. The terminology and classification system provided by the practical enquiry model (see Table 6 on page 16) could similarly be used to analyse and assess online writing in various forms. The trigger, explore, integrate and resolve categorisations are applicable to synchronous and asynchronous interactions.

4.4 Limitations of the study


This was a small study, consisting of a limited number of students on a single academic programme (see page 6). The limitations of the research design have been fully described (see page 19). However, the application of a customised version of a traditional taxonomy to the analysis of online forums was successful, and produced a useful insight into the nature of the academic discourse taking place in that environment.

Although the principle of applying a taxonomy to the online environment appears to be vindicated, the practice was flawed. Ambiguities in coding have already been described (see page 19), and the customisation of the Anderson-Krathwohl taxonomy (see page 17) requires significant refinement to fully embrace the affordances of the online environment.

In summary, this small study looked at ways of analysing student contributions to online discussion forums. It sought to apply a modified version of a traditional educational taxonomy to this environment, and found this to be workable and instructive. The methodology produced findings that were consistent with expectations (with regard to the nature of the two forums studied, one being more academic than the other) and with previous research on assessment (the motivational impact of assessment on participation).


The study suggests that an e-taxonomy would improve assessment practice by standardising the language used to describe and assess learner activity in the online domain. Such a taxonomy has the potential to improve professional practice in the 21st Century in much the same way that the original Taxonomy improved practice in the 20th Century. The Web 2.0 tools, which form such a crucial part of the contemporary learning environment, could be employed to permit the academic community to construct this new taxonomy collaboratively.


APPENDICES


Appendix 1: Coding table for IDEL forum

Taxonomy code

Topic      A1  A2  A3  A4  A5  A6  B1  B2  B3  B4  B5  B6  C1  C2  C3  C4  C5  C6  D1  D2  D3  D4  D5  D6
IDEL1       0   0   0   0   0   0   0   5   1   1   1   0   3  22  62  31   0   0   0   0   0   0   0   0
IDEL2       6   1   1   0   0   0   0   4   5   4   3   0   0   0   1   0   0   0   0   0   0   0   0   0
IDEL3       5   2   9   0   0   0   0   0   0   3   1   0   0   0   3   0   0   0   0   0   1   1   0   0
IDEL4-5    18  17  11   0   0   0   0   7   5   9   0   0   0   0   0   0   0   0   0   0   0   0   0   0
IDEL6-7     6  12  15   0   0   0   0  12   7   4   0   0   0   1   1   3   0   0   0   0   3   4   0   0
IDEL8-9W    4   7   0   0   0   0   0  13   5  15   0   0   0   0   0   0   0   0   0   0   0   0   0   0
IDEL8-9H    2   0   0   0   0   0   0   2   0   3   0   0   0   0   0   0   0   0   0   0   0   0   0   0
IDEL10      2   5   0   1   0   0   0   8   1   3   1   0   0   1   5   4   0   0   0   0   0   0   0   0
IDEL11      0   2   0   0   0   0   1   1   2   8   1   0   0   3   1   1   0   0   0   1   0   0   0   0
SUMMARY    43  46  36   1   0   0   1  52  26  50   7   0   3  27  73  39   0   0   0   1   4   5   0   0

Critical analysis code

Code   IDEL1  IDEL2  IDEL3  IDEL4-5  IDEL6-7  IDEL8-9W  IDEL8-9H  IDEL10  IDEL11  SUMMARY
T         40     10      5       17        9        12         2       6       5      106
E         54      9     12       40       21        10         3      15       2      166
I         25      5      8       10       35        16         2       8      10      119
R          7      1      0        0        3         6         0       2       4       23


Appendix 2: Coding table for ULOE forum

Taxonomy code

Topic       A1  A2  A3  A4  A5  A6  B1  B2  B3  B4  B5  B6  C1  C2  C3  C4  C5  C6  D1  D2  D3  D4  D5  D6
ULOE1-2      1   0   0   0   0   0   1  10  17  38  20   0   0   0   0   4   0   0   0   4   0   7   1   0
ULOE3-4      3   1   4   1   1   0   1   5  10  31  12   0   0   1   3   4   1   0   0   5   1   2   0   0
ULOE5-6      1   0   1   0   0   0   1   3   8   7   2   0   0   0   4   4  11   0   0   0   0   0   0   0
ULOE7-8      3   0   0   0   0   0   1  10  15   9   2   1   1   5  18  11   4   0   0   0   0   1   0   0
ULOE9-10     1   1   0   1   0   0   0   7   6  18   3   0   0   0   1   6   2   0   0   0   0   0   0   0
General      9   4   0   0   0   0   1  10  12  15   0  10   1   0   8   1   0   0   0   1   1   1   1   0
Reference    6   4  16   0   0   0   8   1   7   3   0   0   0   0   1   0   0   0   0   0   0   0   0   0
SUMMARY     24  10  21   2   1   0  13  46  75 121  39  11   2   6  35  30  18   0   0  10   2  11   2   0

Critical analysis code

Code   ULOE1-2  ULOE3-4  ULOE5-6  ULOE7-8  ULOE9-10  General  Reference  SUMMARY
T           32       28        8       26         7       14          8      123
E           48       35       22       25        20       27         16      193
I           22       11        8       27        14       18         19      119
R            2       11        4        3         5       16          3       44


Appendix 3: Week by week comparison

Week    IDEL Views   IDEL Posts   IDEL PI    ULOE Views   ULOE Posts   ULOE PI
 1          60.2         6.2         373         47.9         5.5         261
 2          19.3         1.4          27         47.9         5.5         261
 3          14.0         1.3          19         35.4         4.8         168
 4          36.8         3.6         132         35.4         4.8         168
 5          36.8         3.6         132         19.6         3.0          58
 6          23.5         2.6          61         19.6         3.0          58
 7          23.5         2.6          61         27.4         4.7         129
 8          19.3         2.0          38         27.4         4.7         129
 9          19.3         2.0          38         17.0         3.0          51
10          19.7         2.0          40         17.0         3.0          51
11          19.3         1.6          31          6.1         1.0           6
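As a check on the definition, the PI column above is the product of the mean views and mean posts columns, rounded to the nearest whole number. A minimal Python illustration using the Week 1 IDEL figures:

    def participation_index(mean_views, mean_posts):
        """PI = mean views per student x mean posts per student (see 4.1)."""
        return round(mean_views * mean_posts)

    # Week 1 of the IDEL forum: 60.2 mean views and 6.2 mean posts per student.
    print(participation_index(60.2, 6.2))  # 373, matching the table above
    # Other rows may differ by one or two because the displayed means are rounded.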


REFERENCES
Glossary of assessment terms. (2002, September). Retrieved January 14, 2010, from New Horizons: http://www.newhorizons.org/strategies/assess/terminology.htm
Anderson, L. W., & Krathwohl, D. R. (2001). A taxonomy for learning, teaching and assessing. New York: Addison Wesley Longman.
Angelo, T. A. (1995). Definition of assessment. AAHE Bulletin.
Berliner, D. C. (1987). But do they understand? In V. Richardson-Koehler, Educator's handbook: A research perspective (pp. 259-293). New York: Longman.
Bradwell, P. (2009). The edgeless university: Why Higher Education must embrace new technology. London: Demos.
Campos, M. (2004). A constructivist method for the analysis of networked cognitive communication and the assessment of collaborative learning and knowledge building. Journal of Asynchronous Learning Networks.
Chi, M. (1997). Quantifying qualitative analyses of verbal data: a practical guide. The Journal of the Learning Sciences, 271-315.
Clarke, I., Flaherty, T. B., & Mottner, S. (2001). Student perceptions of educational technology tools. Journal of Marketing Education, 169-177.
Curtis, D., & Lawson, M. (2001). Exploring collaborative online learning. Journal of Asynchronous Learning Networks, 5(1), 21-34.
Dennen, V. P. (2008). Pedagogical lurking: student engagement in non-posting discussion behavior. Computers in Human Behavior, 1624-1633.
Dewey, J. (1897). My pedagogic creed. School Journal, 77-80.
Dressel, P. (1976). Grades: one more tilt at the windmill. Center for the Study of Higher Education Bulletin.
Garrison, D. R., Anderson, T., & Archer, W. (2001). Critical thinking, cognitive presence, and computer conferencing in distance education. American Journal of Distance Education.
Garrison, D., Anderson, T., & Archer, W. (2000). Critical inquiry in a text-based environment: Computer conferencing in Higher Education, 1-19.
Gray, K., Waycott, J., Hamilton, M., Richardson, J., Sheard, J., & Thompson, C. (2009). Web 2.0 authoring tools in Higher Education learning and teaching: New directions for assessment and academic integrity. Discussion paper for national roundtable on 23 November 2009. Melbourne: Australian Learning & Teaching Council.
Grice, H. P. (1989). Studies in the Way of Words. Cambridge, MA: Harvard University Press.
Gunawardena, C. N., Lowe, C. A., & Anderson, T. (1997). Analysis of a global online debate and the development of an interaction analysis model for examining social construction of knowledge in computer conferencing. Educational Computing Research, 397-431.
Hazari, S. (2004). Strategy for the assessment of online course discussion. Journal of Information Systems Education, 349-355.
Henri, F. (1992). Computer conferencing and content analysis. Collaborative learning through computer conferencing: The Najaden papers, 117-136.
Ho, C. H. (2004). Assessing electronic discourse: A case study in developing evaluation rubrics. Paper presented at the 14th Annual Meeting of the Society for Text and Discourse, Chicago, August 14.
Hoag, A., & Baldwin, T. (2000). Using case method and experts in inter-university electronic learning teams. Educational Technology and Society, 3(3), 337-348.
Hrastinski, S. (2009). What is online participation and how may it be studied in e-learning settings? Computers & Education, 78-92.
Jaffe, R., Moir, E., Swanson, E., & Wheeler, G. (2006). E-Mentoring for student success: Online mentoring and professional development for new science teachers. In C. Dede, Online professional development for teachers: Emerging models and methods (pp. 89-11). Cambridge, MA: Harvard Education Press.
Jonassen, D., Davison, M., Collins, M., Campbell, J., & Bannan Haag, B. (1995). Constructivism and computer-mediated communication in distance education. The American Journal of Distance Education, 7-26.
Knox, E. L. (2007). The rewards of teaching on-line. Retrieved January 4, 2010, from H-Net: http://www.h-net.org/aha/papers/Knox.html
McKeachie, W. J. (1986). Teaching tips: a guidebook for the beginning college teacher. Massachusetts: D C Heath and Company.
Mehta, S. (2009, February 11). Developing rubrics for evaluating online discussions. Retrieved December 31, 2009, from University of Texas: http://clear.unt.edu/Content/Home/BrownBagImages/Discussion_Evaluation_Handouts_Mehta.pdf
Microsoft, Cisco, Intel. (2009). Transforming education: assessing and teaching 21st Century skills. Retrieved January 20, 2010, from Microsoft.com: http://download.microsoft.com/download/2F6E9A7CA7-0DC4-4823-993EA54D18C19F2E/Transformative%2520Assessment.pdf
New Media Consortium and Educause. (2008). The Horizon Report. New Media Consortium.
Newman, D., Webb, B., & Cochrane, C. (1995). A content analysis method to measure critical thinking in face-to-face and computer supported group learning. Interpersonal Computing and Technology, 56-77.
Not attributed. (n.d.). 8 ways to incorporate online discussion into evaluation. Retrieved December 26, 2009, from University of New South Wales: http://resources.elearning.unsw.edu.au/content/lt_assessment_discussions.pdf
Pea, R. D. (1993). Practices of distributed intelligence and designs for education. Cambridge: Cambridge University Press.
Pelz, W. (2004). My three principles of effective online pedagogy. Journal of Asynchronous Learning Networks, 33-46.
Pickett, N. (2007, March 17). Rubrics for web lessons. Retrieved January 4, 2010, from http://edweb.sdsu.edu/triton/july/rubrics/Rubrics_for_Web_Lessons.html
Riel, M., & Polin, L. (2004). Online communities: Common ground and critical differences in designing technical environments. In S. A. Barab, R. Kling, & J. H. Gray, Designing for virtual communities in the service of learning (pp. 16-50). Cambridge, MA: Cambridge University Press.
Rovai, A. P. (2000). Online and traditional assessments: What is the difference? The Internet and Higher Education, 141-151.
Rovai, A. P. (2003). Strategies for grading online discussions: Effects on discussions and classroom community in Internet-based university courses. Journal of Computing in Higher Education, 89-107.
Sadler, D. R. (2009). Fidelity as a precondition for integrity in grading academic achievement. Assessment & Evaluation in Higher Education, 1-17.
Sahu, C. (2008). An evaluation of selected pedagogical attributes of online discussion boards. Proceedings ascilite Melbourne 2008 (pp. 861-865). Melbourne.
Shea, P. F., Pickett, A., Pelz, W., & Swan, K. (2001). Measures of learning effectiveness in the SUNY learning network. Proceedings of the 2000 Sloan Summer Workshop on Asynchronous Learning Networks. Sloan-C Press.
Siemens, G. (2008, October 10). New structure of learning. Retrieved January 20, 2010, from elearnspace: http://elearnspace.org/Articles/systemic_impact.htm
Slavin, R. E. (1990). Co-operative learning: Theory, research and practice. New Jersey: Prentice Hall.
Swan, K. (2006). Threaded discussion. Digital Commons for Education (ODCE) 2006 Conference. Ohio.
Swan, K., Schenker, J., Arnold, S., & Kuo, C. (2007). Shaping online discussion: assessment matters. E-Mentor, 1(18).
Swan, K., Shen, J., & Hiltz, S. (2006). Assessment and collaboration in online learning. Journal of Asynchronous Learning Networks, 45-62.
US Department of Education. (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, DC: Published online only.
Yang, Y. T. C., Newby, T., & Bill, R. (2008). Facilitating interactions through structured web-based bulletin boards. Computers & Education, 1572-1585.

