
Computers & Education 55 (2010) 218–228


Learning motivation in e-learning facilitated computer programming courses


Kris M.Y. Law a,b,*, Victor C.S. Lee c, Y.T. Yu c
a Department of Industrial Systems and Engineering, The Hong Kong Polytechnic University, Hunghom, Hong Kong
b Graduate Institute of Industrial Engineering, National Taiwan University, Taipei, Taiwan
c Department of Computer Science, City University of Hong Kong, Kowloon Tong, Hong Kong

Article history: Received 18 August 2009; Received in revised form 11 December 2009; Accepted 11 January 2010

Keywords: Evaluation of CAL systems; Interactive learning environments; Pedagogical issues; Programming; Programming languages

Abstract

Computer programming skills constitute one of the core competencies that graduates from many disciplines, such as engineering and computer science, are expected to possess. Developing good programming skills typically requires students to do a lot of practice, which cannot be sustained unless they are adequately motivated. This paper reports a preliminary study that investigates the key motivating factors affecting learning among university undergraduate students taking computer programming courses. These courses are supported by an e-learning system, the Programming Assignment aSsessment System (PASS), which aims at providing an infrastructure and facilitation to students learning computer programming. A research model is adopted linking various motivating factors, self-efficacy, and the effect due to the e-learning system. Some factors are found to be notably more motivating, namely, 'individual attitude and expectation', 'clear direction', and 'reward and recognition'. The results also suggest that a well-facilitated e-learning setting can enhance learning motivation and self-efficacy.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

Since the 1990s, the Hong Kong government has embarked on a policy of rapid expansion in higher education to provide the necessary
well-trained workforce for economic development. Computer science and information technology graduates have been in high demand. In
the era of globalization, rapid technology development and knowledge-based economy, educators face the challenges of nurturing grad-
uates so that they are well equipped with advanced technical know-how and core competencies. The challenge of preparing graduates
for a fast-changing work environment calls for the development of an effective learning framework. In this regard, technology is often used
to enhance students’ engagement in learning and their academic achievement (Carle, Jaffee, & Miller, 2009; Roth, Ivanchenko, & Record,
2008; Tan, 2006; Yu, Poon, & Choy, 2006). In addition, students' learning motivation is a crucial enabler of learning success. Sufficient attention must be paid not only to the course design and the learning context (Govender, in press), but also to what is in the minds of individual students that motivates their learning process (Jenkins, 2001; Law, Sandnes, Jian, & Huang, 2009; Yin, Law, & Chuah, 2007).

2. Background

Computer programming skills constitute one of the core competencies of a graduate from computer science and, more generally, from
the engineering discipline. Computer programming courses are perceived as uniquely demanding, characterized by the large number of exercises that students are expected to practise intensively in order to develop good programming skills and gain experience in debugging (Lam, Chan, Lee, & Yu, 2008). However, students nowadays easily lose enthusiasm and interest in learning computer programming, especially when they experience repeated failure when practising on their own. The need to improve the teaching and learning of computer
programming thus calls for special attention to the factors affecting students’ learning motivation (Jenkins, 2001).
We believe that learning is a dynamic process in which knowledge acquisition and sharing are shaped by various factors (Govender,
in press; Lau & Yuen, 2009). In addition to individual differences, learning motivation and efficacy of students can be affected by environ-
mental factors, such as the learning approach, infrastructure and social pressure from learning peers (Law et al., 2009).
The study presented in this paper was undertaken among students taking computer programming courses offered by the Department of
Computer Science, City University of Hong Kong. The courses made use of a Web-based facilitative tool, called Programming Assignment

* Corresponding author. Address: Department of Industrial Systems and Engineering, The Hong Kong Polytechnic University, Hunghom, Hong Kong. Tel.: +852 2766 6598;
fax: +852 2362 5267.
E-mail addresses: mfkris@inet.polyu.edu.hk (K.M.Y. Law), csvlee@cityu.edu.hk (V.C.S. Lee), csytyu@cityu.edu.hk (Y.T. Yu).

0360-1315/$ - see front matter © 2010 Elsevier Ltd. All rights reserved.
doi:10.1016/j.compedu.2010.01.007

aSsessment System (PASS), as an integrated e-learning system to support and monitor the teaching and learning activities (Choy et al.,
2008; PASS, 2009; Yu et al., 2006).

2.1. PASS – a facilitative e-learning system

Learning to write computer programs is known to be difficult for many beginners. For decades, researchers have been building auto-
mated e-learning systems to lower the barriers to programming (Ala-Mutka, 2005; Kelleher & Pausch, 2005). For example, Allen, Cart-
wright, and Stoler (2002) built DrJava, a customized program development environment for students to write programs so that they
will not be distracted by the complexity of features found in common integrated development environments. Some researchers developed
algorithm animation systems such as TANGO (Stasko, 1990) and ANIMAL (Rößling & Freisleben, 2002) that guide students to learn the dynamics of program execution by visually demonstrating how the algorithms work. Many of the above systems originated from prototypes developed for research purposes. On the other hand, program submission/assessment systems were originally developed to automate the process of student program submission and assessment (Ala-Mutka, 2005). Once adopted, however, these systems were found to be very useful not only in relieving the administrative burden of instructors, but also in tracking students' programming work and providing prompt feedback to students. As such, these systems have gained broad appeal and are now heavily used in many computer
programming courses in universities worldwide (Ala-Mutka, 2005; Yu et al., 2006).
PASS is a program submission/assessment system first developed in 2004 at City University of Hong Kong, with the primary aim of assisting beginners in learning programming (Chong & Choy, 2004; Yu et al., 2006). It is now regularly used as an integrated part of many undergraduate courses related to computer programming. By the end of July 2009, PASS had served more than 4000 students in a total of 30
courses in computer programming, data structures and data mining (PASS, 2009).
PASS is typically used in the following way. First, after teaching certain programming skills, the instructor uploads a programming prob-
lem, together with a set of test cases. The problem may serve as an exercise for practice of the taught skills, or an assignment for summative
assessment. Next, a student reads the problem from the system, writes a program to solve the problem, and submits it to the system. The
system automatically assesses the student’s submitted program by executing it with a set of instructor-prepared test cases, and instantly
returns the test results to the student. If necessary, the student may revise his/her program and re-submit as many times as needed until it
is correct (or until a certain deadline pre-set by the instructor). Through PASS, students receive timely and relevant feedback to facilitate
their learning without the need to wait for a human mentor or to work in class (Lam et al., 2007). Thus, PASS serves as an e-learning infra-
structure that allows students to learn computer programming through a stepwise-reflective approach and a progressive learning cycle
(Chong & Choy, 2004).
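The submission-assessment cycle described above can be pictured as a small loop. The sketch below is purely illustrative: PASS's internal implementation is not described in this paper, and the function and parameter names are assumptions.

```python
import subprocess

def assess_submission(executable_path, test_cases, time_limit=5):
    """Run a student's compiled program on every instructor-prepared test case.

    `test_cases` is assumed to be a list of (input_text, expected_output)
    pairs; one pass/fail flag per case is returned to the student immediately.
    """
    results = []
    for input_text, expected_output in test_cases:
        try:
            completed = subprocess.run(
                [executable_path],
                input=input_text,
                capture_output=True,
                text=True,
                timeout=time_limit,
            )
            passed = completed.stdout == expected_output
        except subprocess.TimeoutExpired:
            passed = False  # a non-terminating program fails the test case
        results.append(passed)
    return results
```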
In particular, for every test case, PASS compares, character by character, the student program output with the instructor-defined ex-
pected output, and pinpoints the exact position where the two outputs differ (Choy et al., 2008). The output differences, together with
annotations that the instructor may add to the test cases, can serve as useful debugging hints for students, especially for common mistakes
associated with common wrong outputs (Lam et al., 2008). Therefore, students can learn from their mistakes. Moreover, PASS provides a
variety of online information for instructors to monitor students’ performance (Lam et al., 2007).
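As an illustration of this character-by-character comparison (again a simplified sketch, not the actual PASS code), the position of the first mismatch can be located as follows:

```python
def first_difference(actual: str, expected: str):
    """Return the index of the first character at which the student's output
    differs from the expected output, or None if the two outputs match."""
    for i, (a, e) in enumerate(zip(actual, expected)):
        if a != e:
            return i
    if len(actual) != len(expected):
        # One output is a strict prefix of the other.
        return min(len(actual), len(expected))
    return None

# Example: the outputs differ at position 7 ('5' versus '6').
print(first_difference("Total: 5\n", "Total: 6\n"))  # -> 7
```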
Since inception, PASS has been generally well received by both instructors and students, as seen from its frequent usage and comments
by students (Yu et al., 2006). For example, some students commented that PASS provided encouragement and improved their confidence
and learning (PASS, 2009):

"It can help me to check my lab exercises by myself. It can encourage me to do all the lab exercises. So it is very useful."
"I can work more independently and it gives me confidence when I got all correct. Little by little, I build up my own reliance!"
"We can know the bugs immediately; it increases the rate of learning."

Inspired by the previous qualitative observations and informal feedback, we set out to perform a more systematic and detailed inves-
tigation. The present study seeks to investigate a set of key factors of student learning motivation in a PASS-facilitated setting, as well as the
effect (which we refer to as e-effect in this work) of such an e-learning setting on computer programming learning (Fig. 1).

2.2. Learning and motivation

Learning and motivation are highly complex facets of human behaviour. People do learn from their experiences, while their willingness
to learn is affected by a set of determinants. Relationships between motivating factors and learning have been a prominent research topic in
the field of higher education (Jenkins, 2001; Lynch, 2006). Motivation is believed to be an enabler for learning and academic success (Lin-
nenbrink & Pintrich, 2002; Lynch, 2006). This is more so in the case of learning computer programming, where engagement in frequent
practice would not happen without the sustained motivation to succeed (Jenkins, 2001).

Fig. 1. Computer programming learning facilitated by PASS. (The diagram shows motivating factors and the facilitative e-learning infrastructure (PASS) feeding into computer programming learning, which in turn leads to performance.)



2.2.1. Factors motivating learning


Motivation can be defined as the extent to which persistent effort is directed toward a goal (Johns, 1996), and learning motivation can accordingly be understood as the extent to which a student directs persistent effort toward learning. Motivation can be determined intrinsically by the individual as well as extrinsically by situational variables and environmental factors (Amabile, Hill, Hennessey, & Tighe, 1994; Deci, 1980; Ryan & Deci, 2000).

2.2.1.1. Intrinsic factors. Intrinsic factors focus on the individuals rather than the environmental setting. The factors generally include indi-
vidual attitude and expectation, goals and emotions.
The determinants of intrinsically motivated behaviour can be broken into three temporally distinct parts: situational contingencies, motivational and performance processes, and outcomes. Each factor can affect the individual's experience with the activity and influence
the subsequent intrinsic motivation (Harackiewicz, Abrahams, & Wageman, 1987).

2.2.1.2. Individual attitude and expectation. Intrinsic motivation stems from the direct relationship between the student and the learning
tasks (Dev, 1997). Expectancy theory (Vroom, 1964) suggests that motivation is a multiplicative function of three constructs: expectancy
(people have different expectations and levels of confidence about what they are capable of doing), instrumentality (the perceptions of individuals
whether they will actually get what they desire) and valence (the emotional orientations people hold with respect to outcomes or rewards).
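In Vroom's formulation the three constructs combine multiplicatively, commonly summarized as

Motivational force = Expectancy × Instrumentality × Valence,

so that a very low value of any one construct depresses overall motivation regardless of the other two.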

2.2.1.3. Goals and emotions. Personal goals are important in determining performance (Harackiewicz, Barron, Carter, Lehto, & Elliott, 1997; Harackiewicz, Barron, & Elliott, 1998). Research on goal-setting theory carried out in the 1990s by Wofford, Goodwin, and Premack (1992) established the correlation between intrinsic motivation and commitment to goal attainment. On the other hand, people's emotions can vary widely, particularly when considered over a long period (Dweck, Chiu, & Hong, 1995).
Achievement goals reflect the purpose of achievement behaviour in a particular setting. When pursuing mastery goals in a learning situation, a student's purpose is to develop competence and master the task, whereas performance goals focus on demonstrating competence relative to others (Harackiewicz, Barron, Tauer, & Elliot, 2002).
Our study employs a questionnaire survey methodology that makes it difficult to assess the effect of emotion on the learning of com-
puter programming unless a longitudinal survey throughout at least a semester can be carried out, which is beyond the scope of the present
study. As such, we shall not include the emotion factor for investigation in this study.

2.2.1.4. Extrinsic (environmental) factors. In contrast to intrinsic motivation, extrinsic motivation stems from the environment external to the learner.

2.2.1.5. Clear direction. Effective learning in higher education is associated with student’s perception of clear direction (Hendry, Lyon, Pros-
ser, & Sze, 2006). Given a clear direction, students may be treated more favourably and they may respond in a more positive way (Stipek,
1996).

2.2.1.6. Reward and recognition. Reinforcement theory, which is one of the key theories within the mainstream of the motivation field,
emphasizes the relationship between behaviour and its consequences (Skinner, 1969). The promise of competence feedback and recogni-
tion implies some degree of external performance evaluation. The anticipation of performance evaluation can affect students’ motivational
orientation and task involvement during task performance and these motivational processes may influence subsequent interest in the task.
The evaluations leading to corresponding rewards and recognitions may therefore influence intrinsic motivation (Harackiewicz et al.,
1987).
Thus, it is believed that proper reward and recognition can be a key motivator of learning (Jenkins, 2001), though there have also been
studies on the negative effects of rewards on intrinsic motivation (Cameron, Banko, & Pierce, 2001).

2.2.1.7. Punishment. Student motivation concerns the reasons or goals that underlie students’ engagement in or disengagement from aca-
demic activities. Skinner’s belief in the use of rewards and punishments to motivate people has become deeply entrenched (Skinner, 1969).
While positive motivation like incentives seems to make sense, people respond to the expectation of punishments, too. Students can be
positively motivated by a proper amount of punishment, yet they may also be de-motivated if too much punishment is applied as the
instrument of motivation.

2.2.1.8. Social pressure and competition. Not surprisingly, social forces such as peer pressure and competition also affect learning (Chan, Pearson, & Entrekin, 2003; Rassuli & Manzer, 2005; Wellins, Byham, & Wilson, 1991). Their influence has been well documented (Kotnour, 2000; Lee & Ertmer, 2006; Poell & Van der Krogt, 2003) and extensively studied (Cavaluzzo, 1996; Katzenbach & Smith, 1993; Meyer, 1994; Roberts, 1997; Senge, 1990). Peer learning among students in higher education is an increasingly important topic for research.

2.2.2. Efficacy
Learning efficacy, also called self-efficacy or simply efficacy, refers to what a person believes he or she can do in a particular learning task
(in this study, the learning of computer programming). A large body of literature indicates that self-efficacy is related to academic perfor-
mance (Zimmerman & Kitsantas, 2005). People with a high level of self-efficacy are likely to set high goals and to perform well (Locke &
Latham, 1990). Conceptually, self-efficacy is an important motivational element for successful cross-cultural adjustment. The broad appli-
cation of self-efficacy across various domains of behaviour accounts for its popularity in contemporary motivation research (Graham &
Weiner, 1996). Researchers have reported that students’ self-efficacy beliefs are correlated with other motivation constructs and with stu-
dents’ academic performances and achievement. Constructs in these studies have included attributions, goal setting, modelling, problem
solving, test and domain-specific anxiety, reward contingencies, self-regulation, social comparisons, strategy training, other self-beliefs and
expectancy constructs, and varied academic performances across domains.

Longitudinal studies on the relationships between goals, efficacy and learning performance have been carried out (Harackiewicz et al., 2002). The positive relationship between efficacy and performance has previously been demonstrated (Prussia & Kinicki, 1996). Further, the mediating role of students' efficacy in their academic achievement has been established (Bong, 2004; Margolis & McCabe, 2004; Zimmerman & Kitsantas, 2005).
Furthermore, the influence of task orientation and of the importance of competence evaluation on learning efficacy and motivation has been verified by previous research (Covington & Omelich, 1979; Harlow & Cantor, 1995). The effects of rewards and achievement orientation on performance have also been demonstrated (Harackiewicz et al., 1987; Nicholls, 1984; Thrash & Elliot, 2002).

3. Research framework

Motivation is an abstract concept that is difficult to measure (Ball, 1977). Some general categories of motivation, such as intrinsic and extrinsic factors, can nevertheless be identified and measured (Entwisle, 1998). As outlined in the previous section, many such factors may motivate individuals to learn. Our research focus here is to study a set of motivating factors that may influence the process and effectiveness of learning among undergraduate students taking computer programming courses in an e-learning setting. Specifically, we intend to find answers to the following three research questions (RQs) in this study.
The first research question of this study is:

RQ1: What are the factors that motivate the process of computer programming learning?
We draw on intrinsic and extrinsic motivation to identify a set of motivating factors in each category to be investigated in this study (Fig. 2). Recall from Section 2 that intrinsic factors are those focusing on the individual dimension, including 'individual attitude and expectation' of outcomes and the setting of 'challenging goals'. Extrinsic factors are those focusing on the environmental setting, including
‘clear direction’, ‘reward and recognition’, ‘punishment’, and ‘social pressure and competition’. This exploratory study seeks to provide
empirical evidence to show which of them are key motivating factors of students taking computer programming courses in an e-learning
setting.
We further seek to characterize the possible links between the key motivating factors and the efficacy of students in learning computer programming. As mentioned above, previous work has demonstrated a positive relationship between efficacy and performance. Therefore, we presume that a student with a high level of efficacy can perform well in learning computer programming. Thus the second research question is:

RQ2: How strongly do the motivating factors affect computer programming learning?

To answer RQ2, we refine the question and propose that the motivating factors (intrinsic and extrinsic factors) are correlated with stu-
dents’ efficacy. Consequently, we pose the following hypotheses:

H1: Students who value intrinsic factors more highly show a higher level of efficacy.

Since intrinsic factors include 'individual attitude and expectation' together with the setting of 'challenging goals', H1 is further elaborated as follows:

H1a: Students who value 'individual attitude and expectation' more highly show a higher level of efficacy.
H1b: Students who value 'challenging goals' more highly show a higher level of efficacy.
H2: Students who value extrinsic factors more highly show a higher level of efficacy.

In a way similar to H1, the hypothesis H2 is further elaborated as follows:

H2a: Students who value 'clear direction' more highly show a higher level of efficacy.
H2b: Students who value 'reward and recognition' more highly show a higher level of efficacy.
H2c: Students who value 'punishment' more highly show a higher level of efficacy.
H2d: Students who value 'social pressure and competition' more highly show a higher level of efficacy.

Fig. 2. Research framework. (The diagram links the intrinsic factors – individual attitude and expectation, challenging goals – to efficacy via H1; the extrinsic factors – clear direction, reward and recognition, punishment, social pressure and competition – to efficacy via H2; and efficacy to the e-effect via H3.)



Table 1
Summary of the mean scores of the measured items.(a)

Constructs and the corresponding measured items | Code | Mean (μ) | Std. dev.
Individual attitude and expectation | LM1 | 4.47 | 0.89
  I will perform better if I know that good performance in writing programs will benefit me with good grades | F1 | 4.47 | 4.05
  My positive attitude towards learning programming helps me do better | F2 | 4.37 | 1.04
  When I expect to get a high grade, I will be motivated to do better | F3 | 4.54 | 1.10
  The more I expect for a good program, the harder I will work | F4 | 4.47 | 1.00
Challenging goals | LM2 | 4.00 | 1.04
  When I need to write a program to solve a problem that I have never met, I will be motivated to complete it and gain new learning experience through the process | F5 | 4.17 | 1.08
  When a programming exercise looks difficult, I will be motivated to perform better | F6 | 3.89 | 1.16
  Challenging programming exercise motivates me to work harder | F7 | 3.94 | 1.21
Clear direction | LM3 | 4.42 | 0.88
  When I am clear about the aims of a programming exercise, I will be motivated to perform better | F8 | 4.37 | 1.03
  I will perform better when I know specifically what I am going to achieve in a programming exercise | F9 | 4.48 | 0.98
  If I target to get the highest grade in the course, I will be motivated to learn and absorb new knowledge | F10 | 4.41 | 1.05
Reward and recognition | LM4 | 4.38 | 0.85
  My performance will be further improved when my good performance is appraised positively by others (my classmates or teachers) | F12 | 4.26 | 0.96
  I will be motivated to do better in a programming exercise when appropriate reward (e.g., bonus points and higher marks) is given | F13 | 4.44 | 1.08
  The instructor's encouragement and good comment on me motivate me to learn | F14 | 4.44 | 1.04
Punishment (a) | LM5 | 3.74 | 1.09
  If proper punishment (e.g., mark deduction) is applied when I made mistakes in my programs, I will be motivated to learn better | F16 | 3.53 | 1.32
  I will make fewer mistakes in writing programs if I know marks may be deducted | F17 | 3.97 | 1.25
Social pressure and competition | LM6 | 4.00 | 0.88
  Competition with my classmates pushes me to perform better | F11 | 3.97 | 1.17
  The pressure from teacher forces me to learn better and work harder | F18 | 3.75 | 1.14
  When my classmates do better, I will be motivated to learn better to catch up | F19 | 4.32 | 1.03
  The pressure from my classmates pushes me to learn better | F20 | 3.98 | 1.13
e-Effect (effect due to the e-learning system, PASS) | E-EFF | 4.00 | 1.07
  The use of PASS encourages me to learn actively | F21 | 3.99 | 1.22
  I am motivated by using PASS because I can learn more effectively | F22 | 3.97 | 1.18
  I find PASS facilitates my learning in programming effectively | F23 | 4.03 | 1.15
Efficacy | EFFIC | 3.80 | 1.22
  I am confident about my programming knowledge | F24 | 3.74 | 1.27
  I am confident that I can apply the programming skills in solving problems | F25 | 3.87 | 1.27

(a) The first item was excluded from the calculation of the mean and standard deviation of Punishment, as well as from all subsequent analysis, after the item's factor loading value was found to be low. See Table 3 for the factor loadings.

As mentioned previously, efficacy refers to what a person believes he or she can do in a particular task. It may be interesting to know whether an e-learning setting can strengthen such a belief and therefore help students achieve better performance in learning computer programming. So, the third research question is:

RQ3: Does the e-learning setting facilitate computer programming learning?


In answering RQ3, we further hypothesize a positive linkage between efficacy and the perceived effect of the facilitative e-learning sys-
tem (e-effect).

H3: Students at a higher level of efficacy score a higher level of perceived e-effect.

4. Methodology

We conducted a questionnaire survey to collect data from students taking two computer programming courses in a PASS-facilitated
setting. Validated data were then analyzed quantitatively to test the hypotheses and answer the research questions raised in Section 3.

4.1. Questionnaire design

The questionnaire evolved from the previous work of Law et al. (2009), which explored the learning motivating factors of engineering students. In the present study, the questionnaire was further developed in two stages. First, a pilot study was carried out to evaluate the appropriateness of the questions, and the results provided a basis for refinement. The questionnaire was then reviewed and proof-read by three academic staff members from City University of Hong Kong. The finalized questionnaire comprises two parts. The first part asks for demographic information
such as age and gender. The second part, shown in Table 1, consists of 20 questions (items) which enable the identification of factors that
have positive motivating effect on learning, and five items for ascertaining students’ perceived e-effect and efficacy.

Table 2
Demographic information of the respondents.

Class | CS-ProgA | CS-ProgB | Total
Students' major | Computer science (CS) | Engineering (non-CS) |
Number of responses (male) | 99 | 155 | 254
Number of responses (female) | 34 | 77 | 111
Total number of responses | 133 | 232 | 365

Table 3
Factor loadings of the motivating factors.

Motivating factors and the corresponding questions (items) | Code | Factor loading
Individual attitude and expectation
  1. I will perform better if I know that good performance in writing programs will benefit me with good grades | F1 | 0.70
  2. My positive attitude towards learning programming helps me do better | F2 | 0.65
  3. When I expect to get a high grade, I will be motivated to do better | F3 | 0.73
  4. The more I expect for a good program, the harder I will work | F4 | 0.70
Challenging goals
  5. When I need to write a program to solve a problem that I have never met, I will be motivated to complete it and gain new learning experience through the process | F5 | 0.57
  6. When a programming exercise looks difficult, I will be motivated to perform better | F6 | 0.81
  7. Challenging programming exercise motivates me to work harder | F7 | 0.73
Clear direction
  8. When I am clear about the aims of a programming exercise, I will be motivated to perform better | F8 | 0.63
  9. I will perform better when I know specifically what I am going to achieve in a programming exercise | F9 | 0.65
  10. If I target to get the highest grade in the course, I will be motivated to learn and absorb new knowledge | F10 | 0.70
Reward and recognition
  11. My performance will be further improved when my good performance is appraised positively by others (my classmates or teachers) | F12 | 0.60
  12. I will be motivated to do better in a programming exercise when appropriate reward (e.g., bonus points and higher marks) is given | F13 | 0.59
  13. The instructor's encouragement and good comment on me motivate me to learn | F14 | 0.61
Punishment (a)
  14. If proper punishment (e.g., mark deduction) is applied when I made mistakes in my programs, I will be motivated to learn better | F16 | 0.64
  15. I will make fewer mistakes in writing programs if I know marks may be deducted | F17 | 0.54
Social pressure and competition
  16. Competition with my classmates pushes me to perform better | F11 | 0.57
  17. The pressure from teacher forces me to learn better and work harder | F18 | 0.50
  18. When my classmates do better, I will be motivated to learn better to catch up | F19 | 0.63
  19. The pressure from my classmates pushes me to learn better | F20 | 0.70

(a) The factor loading value of Item 14 was found to be low. Hence our subsequent analysis has excluded this item.

Individual attitude and expectation – the motivating effect of this factor was measured by four items concerning the student’s attitude
and expectation towards learning.
Challenging goals – the motivating effect of this factor was measured by three items concerning the challenging goals in learning.
Clear direction – the motivating effect of this factor was measured by three items concerning the specified direction in learning.
Reward and recognition – the motivating effect of this factor was measured by three items concerning positive reinforcements such as
reward, appreciation and encouragement.

Punishment – the motivating effect of this factor was measured by three items¹ concerning the negative reinforcement due to punishment.

Social pressure and competition – the motivating effect of this factor was measured by four items concerning the forces of pressure and
competition from peers.
e-Effect – the perceived effect of the e-learning setting was measured by three items concerning the student’s learning motivation
attributed to PASS.
Efficacy – efficacy was measured by two items concerning the student's confidence in knowledge acquisition and application.
A 6-point Likert scale was adopted, from ‘disagree very much’ to ‘agree very much’. The discerning point was set at 3.5, the middle of the
scale, such that a score higher than 3.5 represents a positive motivating effect on learning.

4.2. Data collection and validation

Table 2 lists the demographic information of the participating students and the number of students in each group. Undergraduate stu-
dents from two classes (namely, CS-ProgA and CS-ProgB) were invited to participate in this study. CS-ProgA was offered to computer sci-
ence (CS) students whereas CS-ProgB was offered to engineering (non-CS) students.

1. One of the three items on 'Punishment' was later excluded from our subsequent analysis due to its low factor loading value, which threatens its validity. See Section 4.2.2 for details.

Table 4
Multi-traits matrix.(a)

Constructs | Individual attitude and expectation | Challenging goals | Clear direction | Reward and recognition | Punishment | Social pressure and competition | e-Effect | Efficacy
Individual attitude and expectation | 0.87 | – | – | – | – | – | – | –
Challenging goals | 0.68** | 0.88 | – | – | – | – | – | –
Clear direction | 0.80** | 0.67** | 0.81 | – | – | – | – | –
Reward and recognition | 0.72** | 0.58** | 0.72** | 0.78 | – | – | – | –
Punishment | 0.50** | 0.45** | 0.41** | 0.36** | 0.66 | – | – | –
Social pressure and competition | 0.59** | 0.56** | 0.55** | 0.59** | 0.51** | 0.80 | – | –
e-Effect | 0.52** | 0.50** | 0.49** | 0.54** | 0.42** | 0.55** | 0.89 | –
Efficacy | 0.57** | 0.66** | 0.52** | 0.52** | 0.42** | 0.57** | 0.51** | 0.89

(a) Pearson correlation, listwise, n = 365, 1-tailed. The diagonal figures are the reliability coefficients of individual constructs; the other figures are the correlation coefficients of pairs of constructs.
* p < 0.05. ** p < 0.01.

Table 5
Summary of results of comparison between sample groups (mean scores; s = standard deviation of the overall sample).

Constructs | Overall (n = 365) mean | Overall s | CS-ProgA (n = 132) mean | CS-ProgB (n = 233) mean | Male (n = 254) mean | Female (n = 111) mean
Individual attitude and expectation | 4.47 | 0.89 | 4.57 | 4.42 | 4.47 | 4.47
Challenging goals | 4.00 | 1.04 | 4.10 | 3.94 | 4.09 | 3.78
Clear direction | 4.42 | 0.88 | 4.55 | 4.35 | 4.44 | 4.36
Reward and recognition | 4.38 | 0.85 | 4.50 | 4.32 | 4.38 | 4.38
Punishment | 3.74 | 1.09 | 3.78 | 3.71 | 3.81 | 3.59
Social pressure and competition | 4.00 | 0.88 | 4.02 | 3.98 | 4.02 | 3.96
e-Effect | 4.00 | 1.07 | 4.23 | 3.87 | 4.04 | 3.89
Efficacy | 3.80 | 1.22 | 3.70 | 3.85 | 3.92 | 3.52

Students were asked to complete the questionnaire during class time to secure a high response rate. The data were manually entered into the computer for statistical analysis. Among the 386 questionnaires returned, 365 were valid samples; the rest were invalid and not included in our subsequent analysis. Since the data obtained from the survey were derived from interval measurements (Likert scale treated as continuous) and the observations are independent, arithmetic operations such as taking averages can be used.

4.2.1. Non-response bias


To detect non-response bias, the t-test was conducted to see if there were differences between early respondents and late respondents
in terms of variables relevant to the research hypotheses (Armstrong & Overton, 1977). The mean values of the measured items in the ques-
tionnaires of the first 10% respondents and the last 10% respondents were compared. The results of the t-test show no statistically signif-
icant difference between the values across the two groups of (early versus late) respondents, indicating that non-response bias might not be
a problem in this study.

4.2.2. Reliability and validity


The accuracy of the survey study was verified in terms of validity and reliability. Firstly, the validity of the constructs (which include the six motivating factors as well as e-effect and efficacy) was verified through exploratory factor analysis with oblique rotation. The results are shown in Table 3. The factor loadings support the validity of all the constructs, except that one of the items of 'Punishment' had to be dropped due to its low factor loading value. In view of this, our subsequent analysis excludes the dropped item.
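The paper does not state which software was used for this step (SPSS is used elsewhere); purely as an illustration, an equivalent oblique-rotation factor analysis could be sketched in Python as below, where the package, file name and column layout are assumptions.

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer  # assumes the factor_analyzer package

# Hypothetical file with one column per questionnaire item (F1-F20)
# and one row per respondent.
responses = pd.read_csv("survey_items.csv")

fa = FactorAnalyzer(n_factors=6, rotation="oblimin")  # oblique rotation
fa.fit(responses)

loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
print(loadings.round(2))  # items loading weakly on their factor (cf. Table 3) may be dropped
```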
Secondly, to warrant the reliability of the questionnaire, the set of items measuring the same construct should be highly correlated. We used Cronbach's alpha, which summarizes the inter-item correlations, to test the reliability. SPSS was used to obtain the reliability coefficient (α) of the survey questions. Alpha values greater than 0.70 are conventionally considered acceptable (Johnson & Wichern, 1998). Since the obtained value of α (0.95) is close to 1, we believe that a high level of internal reliability of the questionnaire has been achieved.
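For concreteness, Cronbach's alpha for one construct can be computed directly from the item scores, as in the hedged sketch below (column names are illustrative):

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for the columns of `items`, assumed to be the scores
    of the questions measuring a single construct."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

# e.g., alpha for the 'Individual attitude and expectation' items:
# cronbach_alpha(responses[["F1", "F2", "F3", "F4"]])
```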
Lastly, the discriminant validity of each construct is checked using a multi-trait matrix presented in Table 4. The figures (in bold italics)
in the diagonal of the matrix are the reliability coefficients of individual constructs. Other figures are the correlation coefficients of pairs of
constructs. We observe that, in each column, the reliability coefficient of each construct is larger than the correlation coefficients of all pairs
of this construct with others. This observation indicates that the internal reliability of an individual construct is higher than the inter-con-
struct reliability (Churchill, 1979), which, in turn, shows strong empirical support for discriminant validity.

5. Results and findings

A summary of results containing the overall mean scores of individual constructs and the mean scores in different groupings is presented in Table 5.

5.1. Motivating effect of factors

Recall that our first research question RQ1 is: What are the factors that motivate the process of computer programming learning?

Table 6
Results from the t-test.(a)

Factors | t-Value (df = 364) | Significance value | Mean score | Std. dev.
Individual attitude and expectation | 21.29 | 0.00 | 4.47 | 0.89
Challenging goals | 9.48 | 0.00 | 4.00 | 1.04
Clear direction | 20.63 | 0.00 | 4.42 | 0.88
Reward and recognition | 20.14 | 0.00 | 4.38 | 0.85
Punishment | 4.51 | 0.00 | 3.74 | 1.09
Social pressure and competition | 11.12 | 0.00 | 4.00 | 0.88

(a) Discerning point (test value) for mean score = 3.5. Critical value for the t-value = 1.96.

Table 7
Summary of the stepwise regression model.(a,b)

Model | R square | Standard error of the estimate | Change statistics: F | Significance of F change
1 | 0.503 | 0.855 | 128.9 | 0.000

(a) Dependent variable: efficacy.
(b) Entering variables: individual attitude and expectation, challenging goals, social pressure and competition.

Table 8
Coefficients of the independent variables.(a)

Variable | Unstandardized coefficient B | Std. error | Standardized coefficient (β) | t-Value | Significance value
Constant | 0.381 | 0.238 | – | 1.60 | 0.110
Individual attitude and expectation | 0.166 | 0.071 | 0.122 | 2.345 | 0.020
Challenging goals | 0.504 | 0.060 | 0.429 | 8.433 | 0.000
Social pressure and competition | 0.357 | 0.063 | 0.262 | 5.672 | 0.000

(a) Dependent variable: efficacy.

To answer RQ1, a t-test is carried out to evaluate the strength of each factor’s (positive) motivating effect on learning. Parametric tech-
niques are used to test the hypotheses (Cooper & Schindler, 2003). We postulate the null hypothesis for each identified factor as follows:

H0: The identified factor is not a significant motivating factor on computer programming learning.

As the discerning point was set a priori at the middle of the 6-point Likert scale (that is, the test value is 3.5), the above null hypothesis is equivalent to H0: μ ≤ 3.5, where μ is the mean score of the responses. The significance level is set at α = 0.05. The degrees of freedom, df, of this data set are n − 1 = 364, where n is the number of samples. From the t-table, the critical value at the 95% confidence level with df = 364 is 1.96. The t-value of each factor provides an indication of its motivating effect on learning. If the t-value is greater than the critical value of 1.96, we reject H0; otherwise, we do not reject H0. The results of the t-test are summarized in Table 6.
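For concreteness, this one-sample t-test against the discerning point of 3.5 can be sketched as follows; the scores shown are illustrative placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Stand-in for the 365 respondents' mean ratings on one factor.
scores = np.array([4.3, 3.7, 5.0, 4.0, 4.7, 3.3, 4.3, 5.3, 4.0, 4.7])

t_stat, p_two_sided = stats.ttest_1samp(scores, popmean=3.5)
# One-sided test of H0: mu <= 3.5 against H1: mu > 3.5.
p_one_sided = p_two_sided / 2 if t_stat > 0 else 1 - p_two_sided / 2

if t_stat > 1.96:  # critical value used in the paper (alpha = 0.05, df = 364)
    print("Reject H0: the factor has a significant positive motivating effect")
```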
It can be seen that all of the identified factors, both intrinsic and extrinsic, have strong positive motivating effect on learning. In particular,
‘individual attitude and expectation’, ‘clear direction’ and ‘reward and recognition’ have the greatest motivating effect, while ‘punishment’
has the least. Among the three key motivating factors, the intrinsic factor ‘individual attitude and expectation’ is the most recognized.

5.2. Linkage between efficacy and students’ values on motivating factors

Tables 7 and 8 show the summary of the stepwise regression model. The F value, significant at the 0.001 level, and the R square value (0.503) in Table 7 show that the model accounts for a substantial and significant proportion of the variance in efficacy. The results show that, among the six motivating factors, only 'individual attitude and expectation', 'challenging goals' and 'social pressure and competition' are significant predictors of efficacy, as shown in Table 8. This suggests that these factors covary closely with efficacy. It can also be observed that the results agree with the fairly large (that is, between 0.52 and 0.66) correlation coefficients of efficacy with these three factors shown in the last row of Table 4.
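The paper's stepwise regression was presumably run in SPSS; a hedged sketch of an equivalent forward-selection procedure is given below, where the data file and variable names are illustrative assumptions.

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_scores.csv")  # hypothetical per-student construct scores
candidates = ["individual_attitude", "challenging_goals", "clear_direction",
              "reward_recognition", "punishment", "social_pressure"]
selected = []

while candidates:
    # Try adding each remaining factor and keep the one giving the best fit,
    # provided its coefficient is statistically significant.
    trials = []
    for factor in candidates:
        X = sm.add_constant(df[selected + [factor]])
        model = sm.OLS(df["efficacy"], X).fit()
        trials.append((model.rsquared, model.pvalues[factor], factor))
    r_squared, p_value, best = max(trials)
    if p_value > 0.05:
        break
    selected.append(best)
    candidates.remove(best)

print(selected)  # cf. the three factors retained in Table 8
```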
On the whole, the results have painted a clearer picture for RQ2: How strongly do the motivating factors affect computer programming
learning?
The significant and positive relationships between the two intrinsic factors ('individual attitude and expectation' and 'challenging goals') and efficacy support H1, that is, students who value intrinsic factors more highly show a higher level of efficacy. On the other hand, not all the extrinsic factors exhibit such a relationship, so H2 is only partially supported. Specifically, among the extrinsic factors, only students who value 'social pressure and competition' more highly show a higher level of efficacy.

5.3. Effect of studying under the e-learning setting

To assess the e-effect, that is, the perceived effectiveness of the e-learning system in use, students were asked to rate whether they agree that PASS encourages, motivates, and facilitates their learning. The mean scores (μ) of each item are presented in Appendix C. Students generally agreed that PASS did encourage (μ = 3.99) and motivate (μ = 3.97) them to learn, and they also found that PASS facilitated their learning effectively (μ = 4.03). Since all the mean scores are greater than the discerning point (3.5), the items on e-effect are significantly positive. Therefore, an affirmative answer is obtained for RQ3: Does the e-learning setting facilitate computer programming learning?
Furthermore, a significant and fair correlation is observed between efficacy and perceived e-effect, with the Pearson correlation coefficient = 0.51 and the corresponding p-value < 0.01 (see the e-effect column of the last row in Table 4). Hence the hypothesis H3 is supported, that is, students at a higher level of efficacy report a higher level of perceived e-effect on their learning.

Table 9
Comparing the mean scores between the two classes by independent sample t-test.

Constructs | t-Value | Significance value | Significant difference between the two classes (CS-ProgA and CS-ProgB)?
Individual attitude and expectation | 1.60 | 0.11 | No
Challenging goals | 1.33 | 0.18 | No
Clear direction | 2.11 | 0.04 | Yes
Reward and recognition | 1.96 | 0.05 | Yes
Punishment | 0.44 | 0.65 | No
Social pressure and competition | 0.55 | 0.58 | No
e-Effect | 3.18 | 0.00 | Yes
Efficacy | 1.14 | 0.25 | No

Table 10
Comparing the mean scores between male and female students by independent sample t-test.

Constructs | t-Value | Significance value | Significant difference between male and female respondents?
Individual attitude and expectation | 0.00 | 0.99 | No
Challenging goals | 2.66 | 0.01 | Yes
Clear direction | 0.79 | 0.43 | No
Reward and recognition | 0.01 | 0.99 | No
Punishment | 1.76 | 0.08 | No
Social pressure and competition | 0.61 | 0.54 | No
e-Effect | 1.21 | 0.23 | No
Efficacy | 2.94 | 0.00 | Yes

5.4. Differences between sample groups

Apart from the three research questions about all the participating students, we also examined differences between the sample groups. An independent sample t-test was used to compare the mean scores of the constructs between the two classes (CS-ProgA and CS-ProgB). The results of the t-test are presented in Table 9.
In Table 9, the small significance values (≤0.05) of 'clear direction', 'reward and recognition' and 'e-effect' suggest that there are significant differences in these aspects between the two classes (CS-ProgA and CS-ProgB), that is, between CS students and non-CS students. It is interesting to note that the e-effect rated by CS students is significantly higher than that rated by non-CS students (significance value = 0.00). The mean scores of e-effect rated by CS students and non-CS students are 4.23 and 3.87, respectively (Table 5). The significant difference between the two groups is probably attributable to the difference in the students' backgrounds.
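The between-group comparison itself is a standard independent-samples t-test; an illustrative sketch (again with a hypothetical data file and variable names) is:

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("survey_scores.csv")  # hypothetical per-student scores with a 'class' label
cs = df.loc[df["class"] == "CS-ProgA", "e_effect"]
non_cs = df.loc[df["class"] == "CS-ProgB", "e_effect"]

t_stat, p_value = stats.ttest_ind(cs, non_cs)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")  # cf. the e-effect row of Table 9
```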
A similar t-test was used to compare the mean scores of constructs between male and female students, and the results are shown in Table
10. It is interesting to note that there are significant differences between the two groups of students regarding ‘challenging goals’ and effi-
cacy. Male students are apparently more motivated by challenges, and they also show a higher level of efficacy than female students.

6. Discussions

6.1. Summary and pedagogical insights

Let us summarize the important observations we have made from the results of this study. Firstly, among the six identified factors,
‘individual attitude and expectation’, ‘clear direction’, and ‘reward and recognition’ have the greatest motivating effect on learning.
Secondly, three motivating factors, namely, ‘individual attitude and expectation’, ‘challenging goals’, and ‘social pressure and competition’
have a significant and positive relationship with efficacy. Thirdly, we observe that our facilitated e-learning setting, PASS, is instrumental in
enhancing students' efficacy. The results agree well with the previous study by Law et al. (2009), which suggested that a supportive setting should involve pulling forces such as reward, expectation and clear goals, where social pressure is also to be expected.
The intrinsic factor ‘individual attitude and expectation’ seems to stand out as both strongly motivating and highly correlated with
efficacy. Moreover, both the intrinsic factors under study, namely, ‘individual attitude and expectation’ and ‘challenging goals’, are visibly
correlated with efficacy, whereas for extrinsic factors, only ‘social pressure and competition’ is seen to be significantly so. With this knowl-
edge, a challenge to the educator is how the teaching and learning activities can be organized to effectively reinforce these motivating
factors for the benefit of enhancing students’ learning efficacy.
As regards the effect of PASS, our third observation confirms, to our gratification, that it is performing well in its facilitative role of encouraging and motivating students to learn effectively. Besides, our first two observations provide plausible explanations and insights into how PASS could be (and, in retrospect, has actually been) employed by course instructors to strengthen the key motivating factors.

Consider, for instance, the way that PASS has been used in lab sessions, in which students were required to work on programming prob-
lems in class. PASS allows the course instructor to monitor, in real time, the testing/submissions of each student’s program (Lam et al.,
2007). Once the instructor found that a student had submitted a good solution to the programming problem, he/she would demonstrate to the whole class how the submitted solution worked, what features the program possessed to qualify it as a high-quality solution, and that such a program would achieve high grades in assessment. Students were thus led to expect that good programs of their own, as well as the subsequent high grades, were actually achievable. (Note that it makes a subtle difference that the programs were actually written by some students on the spot, not by the instructor or prepared in advance.) According to the present study, this kind of expec-
tation and self-improvement attitude of the student is a strong motivator of learning, and is highly correlated to students’ efficacy. The
teaching activity also somewhat promoted a light atmosphere of ‘peer pressure and competition’ among students to perform well so that
their own programs would be showcased in class as a tangible way of recognition of their efforts. We consider that this kind of teaching
activity would be difficult, if not impossible, to carry out in class without the facilitation of PASS.
Furthermore, this study has illuminated several exciting opportunities for possible enhancement to PASS so as to further capitalize on its
capability to strengthen the learning motivators. One way we have conceived is to build an element of 'peer pressure and competition' into the system by providing students with real-time information on not only their own progress (which they can currently view in PASS), but also
their peers’. The information could be the number of test cases that other students’ programs have already passed, or the number of stu-
dents who have already completed the programming exercise. In this way, students may be more enthused to compete with one another in
achieving better performance. This kind of informal 'contest' can be made even more tangible by properly rewarding students' effort and outcomes, such as awarding bonus scores for the earliest or best-quality solution, or to students who make significant improvements over their previous solutions, and so on. Further work is needed to fine-tune the 'definitions' and 'rules' of the 'game' to the best effect.
Another way is to enhance PASS so that students may submit the test cases they themselves construct to test their own programs. These
test cases may in turn be used by PASS to test other students’ programs. This would provide ‘challenging goals’ to the more capable stu-
dents who would like to pass not only the test cases provided by the instructor, but also those provided by their peers. Although the idea of requiring students to submit test cases along with their programs is not new, it was originally adopted not explicitly as a means of enhancing students' learning motivation, but was inspired by the desire to provide concrete feedback to students for improvement and the need to teach software testing skills (Edwards, 2003). Incidentally, this study provides further grounds and explanation for why the approach resulted in better learning efficacy, in terms of students' increased confidence, as well as quantitative evidence of improvement, in their programming ability, as reported in Edwards (2003).
Furthermore, the instructor can provide optional programming exercises in the repository of PASS, rated with difficulty level (Lam et al.,
2007), so that students can set their own ‘challenging goals’ in addition to the required practice work. For instance, Astrachan (2004) has
reported his successful attempt in motivating students’ interest by requiring them to work on some programming problems that were orig-
inally designed for programming contests.
In summary, there are plentiful ways to provide more ‘reward and recognition’ as the results of ‘social pressure and competition’ or
achievement of ‘challenging goals’. Again, such opportunities are hardly possible without an automated system. Indeed, as one reflects fur-
ther on the implications of this study, one could come up with an even longer list of possibilities for improving the current teaching and
learning practice and enhancing the facilitative e-learning system, with opportunities constrained only by our creativity and resources.

6.2. Limitations

This study represents a step forward in investigating the key factors affecting learning among our undergraduate students taking com-
puter programming courses, supported by a facilitative e-learning system. As discussed above, the results of the study have already been
very informative and insightful for the teaching practices of the classes under study. However, to establish more general results, a larger-scale study would be needed. For instance, it remains unclear to what extent these findings can be generalized to other classes, other cohorts of students, or other universities. Future efforts to replicate these findings in different settings can address this concern.

7. Conclusions

There are few accounts of pedagogical frameworks that incorporate active use of an e-learning setting in computer programming courses. Given the uniquely demanding nature of learning computer programming, we believe that it is important for educators teaching
these courses to empirically and systematically identify the set of factors that motivate the learning of their students. In particular, it is
useful to find out whether and how an e-learning setting is helpful to students in enhancing their efficacy.
Notwithstanding its limitations, our work represents an important initiative to understand the key factors affecting student learning
motivation in computer programming courses. Moreover, our study provides evidence that a well facilitated e-learning setting can indeed
enhance learning motivation and student efficacy. More importantly, the results from this study have provided insights to educators who
are keen on using technology in their teaching. Although we cannot claim at this point that we have obtained a full picture of how an effective teaching and learning framework can be developed, the findings from this study are invaluable for further improvement of teaching and learning strategies as well as courseware development.

References

Ala-Mutka, K. M. (2005). A survey of automated assessment approaches for programming assignments. Computer Science Education, 15(2), 83–102.
Allen, E., Cartwright, R., & Stoler, B. (2002). DrJava: A lightweight pedagogic environment for Java. In Proceedings of the 33rd SIGCSE technical symposium on computer science
education (pp. 137–141).
Amabile, T. M., Hill, K. G., Hennessey, B. A., & Tighe, E. M. (1994). The work preference inventory: Assessing intrinsic and extrinsic motivational orientations. Journal of
Personality and Social Psychology, 66(5), 950–967.
Armstrong, J. S., & Overton, T. S. (1977). Estimating non-response bias in mail surveys. Journal of Marketing Research, 16, 396–400.
Ball, S. (1977). Motivation in education. Academic Press.
Astrachan, O. L. (2004). Non-competitive programming contest problems as the basis for just-in-time teaching. In Proceedings of 34th ASEE/IEEE frontiers in education
conference (pp. T3H-20–24).

Bong, M. (2004). Academic motivation in self-efficacy, task value, achievement goal orientations, and attributional beliefs. The Journal of Educational Research, 97(6), 287–298.
Cameron, J., Banko, K. M., & Pierce, W. D. (2001). Pervasive negative effects of rewards on intrinsic motivation: The myth continues. The Behavior Analyst, 24, 1–44.
Carle, A. C., Jaffee, D., & Miller, D. (2009). Engaging college science students and changing academic achievement with technology: A quasi-experimental preliminary
investigation. Computers & Education, 52, 376–380.
Cavaluzzo, L. (1996). Enhancing team performance. The Healthcare Forum Journal, 39(5), 57–59.
Chan, C. C. A., Pearson, C., & Entrekin, L. (2003). Examining the effects of internal and external team learning on team performance. Team Performance Management: An
International Journal, 9(7/8), 174–181.
Chong, S. L., & Choy, M. (2004). Towards a progressive learning environment for programming courses. In New horizon in web-based learning: Proceedings of the 3rd
international conference web-based learning (pp. 200–205).
Choy, M., Lam, S., Poon, C. K., Wang, F. L., Yu, Y. T., & Yuen, L. (2008). Design and implementation of an automated system for assessment of computer programming
assignments. Advances in web-based learning. Berlin/Heidelberg: Springer (pp. 584–596).
Churchill, G. A. (1979). A paradigm for developing better measures of marketing constructs. Journal of Marketing Research, 16, 64–73.
Cooper, D. R., & Schindler, P. S. (2003). Business research methods (8th ed.). McGraw-Hill.
Covington, M. V., & Omelich, C. E. (1979). Are causal attributions causal? A path analysis of the cognitive model of achievement motivation. Journal of Personality and Social
Psychology, 37, 1487–1504.
Deci, E. L. (1980). The psychology of self-determination. Lexington, Mass: Lexington Books.
Dev, P. C. (1997). Intrinsic motivation and academic achievement: What does their relationship imply for the classroom teacher? Remedial and Special Education, 18(1), 12–19.
Dweck, C. S., Chiu, C., & Hong, Y. (1995). Implicit theories and their role in judgments and reactions: A world from two perspectives. Psychological Inquiry, 6, 267–285.
Edwards, S. H. (2003). Improving student performance by evaluating how well students test their own programs. Journal on Educational Resources in Computing, 3(3).
Entwisle, N. (1998). Motivation and approaches to learning: Motivating and conceptions of teaching. In Brown et al. (Eds.), Motivating students. Kogan Page.
Govender, I. (in press). The learning context: Influence on learning to program. Computers & Education.
Graham, S., & Weiner, B. (1996). Theories and principles of motivation. In D. C. Berliner & R. C. Calfee (Eds.), Handbook of educational psychology (pp. 63–84). New York: Simon
& Schuster Macmillan.
Harackiewicz, J. M., Abrahams, S., & Wageman, R. (1987). Performance evaluation and intrinsic motivation: The effects of evaluative focus, rewards, and achievement
orientation. Journal of Personality and Social Psychology, 53, 1015–1023.
Harackiewicz, J. M., Barron, K. E., Carter, S. M., Lehto, A. T., & Elliott, A. J. (1997). Predictors and consequences of achievement goals in the college classroom: Maintaining
interest and making the grade. Journal of Personality and Social Psychology, 73(6), 1284–1295.
Harackiewicz, J. M., Barron, K. E., & Elliott, A. J. (1998). Rethinking achievement goals: When are they adaptive for college students and why? Educational Psychologist, 33(1),
1–21.
Harackiewicz, J. M., Barron, K. E., Tauer, J. M., & Elliot, A. J. (2002). Predicting success in college: A longitudinal study of achievement goals and ability measures as predictors of
interest and performance from freshman year through graduation. Journal of Educational Psychology, 94(3), 562–575.
Harlow, R. E., & Cantor, N. (1995). To whom do people turn when things go poorly? Task orientation and functional social contacts. Journal of Personality and Social Psychology,
69, 329–340.
Hendry, G. D., Lyon, P. M., Prosser, M., & Sze, D. (2006). Conceptions of problem-based learning: The perspectives of students entering a problem-based medical program.
Medical Teacher, 28(6), 573–575.
Jenkins, T. (2001). The motivation of students of programming. In Proceedings of ITiCSE 2001: The 6th annual conference on innovation and technology in computer science
education (pp. 53–56).
Johns, G. (1996). Organizational behaviour: Understanding and managing life at work (4th ed.). New York: HarperCollins.
Johnson, R. A., & Wichern, D. W. (1998). Applied multivariate statistical analysis. USA: Prentice-Hall International Inc..
Katzenbach, J. R., & Smith, D. K. (1993). The discipline of teams. Harvard Business Review(March/April), 111–120.
Kelleher, C., & Pausch, R. (2005). Lowering the barriers to programming: A taxonomy of programming environments and languages for novice programmers. ACM Computing
Surveys, 37(2), 83–137.
Kotnour, T. (2000). Organizational learning practices in the project management environment. The International Journal of Quality and Reliability Management, 17(4/5),
393–406.
Lam, M. S. W., Chan, E. Y. K., Lee, V. C. S., & Yu, Y. T. (2008). Designing an automatic debugging assistant for improving the learning of computer programming. Lecture Notes in
Computer Science, 5169, 359–370.
Lam, S., Yuen, L., Choy, M. Y., Poon, C. K., Wang, F. L., & Yu, Y. T. (2007). A Web-based system for supporting feedback-enabled teaching and learning of computer programming.
In Proceedings of ICT 2007: The 2007 international conference on ICT in teaching and learning (pp. 474–487).
Lau, W. W. F., & Yuen, A. H. K. (2009). Exploring the effects of gender and learning styles on computer programming performance: Implications for programming pedagogy.
British Journal of Educational Technology, 40(4), 696–712.
Law, K. M. Y., Sandnes, F. E., Jian, H., & Huang, Y. (2009). A comparative study of learning motivation among engineering students in South East Asia and beyond. International
Journal of Engineering Education, 25(1), 144–151.
Lee, Y., & Ertmer, P. A. (2006). Examining the effect of small group discussions and question prompts on vicarious learning outcomes. Journal of Research on Technology in
Education, 39(1), 66–80.
Linnenbrink, E. A., & Pintrich, P. R. (2002). Motivation as an enabler for academic success. School Psychology Review, 31(3), 313–327.
Locke, E. A., & Latham, G. P. (1990). A theory of goal setting and task performance. Englewood Cliffs, NJ: Prentice Hall.
Lynch, D. J. (2006). Motivational factors, learning strategies and resources management as predictors of course grades. College Student Journal, 40(2), 423–428.
Margolis, H., & McCabe, P. P. (2004). Self-efficacy, a key to improving the motivation of struggling learners. The Clearing House, 77(6), 241–249.
Meyer, M. A. (1994). The dynamics of learning with team production implications for task assignment. Quarterly Journal of Economics, 109(4), 1157–1184.
Nicholls, J. G. (1984). Achievement motivation: Conceptions of ability, subjective experience, task choice, and performance. Psychological Review, 91, 328–346.
PASS (2009). Programming Assignment aSsessment System. Department of Computer Science, City University of Hong Kong. <http://www.cs.cityu.edu.hk/~passweb/>. Last
Accessed 02.12.09.
Poell, R. F., & Van der Krogt, F. J. (2003). Project-based learning in organizations: Towards a methodology for learning in groups. Journal of Workplace Learning, 15(5), 217–228.
Prussia, G. E., & Kinicki, A. J. (1996). A motivational investigation of group effectiveness using social-cognitive theory. Journal of Applied Psychology, 81(2), 187–198.
Rassuli, A., & Manzer, J. P. (2005). Teach to learn: Multivariate analysis of perception of success in team learning. Journal of Education for Business, 81(1), 21–27.
Roberts, E. (1997). Team training: When is enough. . .enough? The Journal for Quality and Participation, 39(5), 16–20.
Roth, V., Ivanchenko, V., & Record, N. (2008). Evaluating student response to WeBWorK, a web-based homework delivery and grading system. Computers & Education, 50,
1462–1482.
Rößling, G., & Freisleben, B. (2002). ANIMAL: A system for supporting multiple roles in algorithm animation. Journal of Visual Languages and Computing, 13, 341–354.
Ryan, R. M., & Deci, E. L. (2000). Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. American Psychologist, 55, 68–78.
Senge, P. (1990). The fifth discipline: The art and practice of the learning organization. New York: Doubleday.
Skinner, B. F. (1969). Contingencies of reinforcement: A theoretical analysis. Englewood Cliffs, NJ: Prentice-Hall.
Stasko, J. (1990). TANGO: A framework and system for algorithm animation. Computer, 23, 27–39.
Stipek, D. J. (1996). Motivation and instruction. In D. C. Berliner & R. C. Calfee (Eds.), Handbook of educational psychology. New York: Macmillan.
Tan, O. (2006). Development of a thinking programme for engineering students. Innovations in Education and Teaching International, 43(3), 245–259.
Thrash, T. M., & Elliot, A. J. (2002). Implicit and self-attributed achievement motives: Concordance and predictive validity. Journal of Personality, 70, 729–755.
Vroom, V. (1964). Work and motivation. New York: John Wiley & Sons.
Wellins, R. S., Byham, W. C., & Wilson, J. M. (1991). Empowered teams: Creating self-directed work groups that improve quality, productivity, and participation. San Francisco, CA:
Jossey-Bass Publishers.
Wofford, J. C., Goodwin, V. L., & Premack, S. (1992). Meta-analysis of the antecedents of personal goal level. Journal of Management, 18(3), 595–615.
Yin, Y. T., Law, M. Y. K., & Chuah, K. B. (2007). Investigation of the motivating factors influencing team learning. In Proceedings of the 5th international symposium on
management of technology (CD Rom).
Yu, Y. T., Poon, C. K., & Choy, M. (2006). Experiences with PASS: Developing and using a Programming Assignment aSsessment System. In Proceedings of QSIC 2006: The 6th
international conference on quality software (pp. 360–365).
Zimmerman, B. J., & Kitsantas, A. (2005). Homework practices and academic achievement: The mediating role of self-efficacy and perceived responsibility beliefs.
Contemporary Education Psychology, 30, 397–417.
