
A Tutoring System for Parameter Passing

in Programming Languages

Harsh Shah, Amruth N. Kumar
Ramapo College of New Jersey
505 Ramapo Valley Road
Mahwah, NJ, USA, 07430-1680
(201) 684 7712
amruth@ramapo.edu




ABSTRACT
We have developed a tutoring system for the parameter passing
mechanisms discussed in a typical Comparative Programming
Languages course, viz., value, result, value-result, reference and
name. The tutor helps students better understand these parameter
passing mechanisms by administering problems for them to solve
and providing instant feedback on their solution. In this paper, we
will describe the design and features of the tutor. We will also
discuss a test that we conducted to evaluate the effectiveness of
using the tutor, and present its results. The test confirmed our
hypothesis that using the tutor would result in a systematic
improvement in the learning of our students. This tutor may be
used in the Comparative Programming Languages course as well
as in Computer Science I.
Categories and Subject Descriptors
K.3.1 [Computers and Education]: Computer Uses in Education -
Computer-assisted instruction
General Terms
Experimentation, Languages.
Keywords
Problem-Solving, Online Learning, Active Learning, Parameter
Passing Mechanisms in Programming Languages, Web-Based
Tutors, Evaluating Educational Software
1. INTRODUCTION
Parameter Passing mechanisms are an important part of the study
of programming languages. In a typical Comparative
Programming Languages course, five different parameter passing
mechanisms are discussed: value, result, value-result, reference
and name. At least a few of these mechanisms would be new to
every student, since no programming language includes all five
parameter passing mechanisms. Our students, having studied C++
in their introductory courses, find value-result to be a perplexing
alternative to reference, and are stymied by the complexities of
parameter passing by result.
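
To make the contrast concrete, consider the following sketch (our own illustration, not drawn from the tutor). Pass by reference makes the formal parameter an alias of the actual parameter, whereas pass by value-result copies the value in at the call and back out at the return. C++ supports only the former natively, so the copy-out step is simulated here by an explicit assignment; the names and constants are ours.

#include <iostream>

int g = 1;  // global variable, also passed as the actual parameter

// Pass by reference: f is an alias of g, so every update to g is
// immediately visible through f, and vice versa.
void byReference(int &f) {
    g = g + 10;   // g and f are both 11 now, because f IS g
    f = f + 100;  // g and f are both 111
}

// Pass by value-result (simulated): the value is copied in at the
// call and copied back at the return; in between, f and g work
// independently.
void byValueResult(int f) {  // copy-in: f = 1
    g = g + 10;   // g is 11; f is still 1
    f = f + 100;  // f is 101; g is still 11
    g = f;        // simulated copy-out into the actual parameter g
}

int main() {
    byReference(g);
    std::cout << "reference:    g = " << g << '\n';  // prints 111
    g = 1;
    byValueResult(g);
    std::cout << "value-result: g = " << g << '\n';  // prints 101
    return 0;
}

The two mechanisms agree in the absence of aliasing, which is precisely why students who know only C++ references find value-result perplexing: the difference surfaces only when the actual parameter is also reachable by another name inside the function.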

The presence of global variables, aliasing, array elements, and
expressions in array subscripts serves both to highlight and to
complicate the comparison of these parameter passing
mechanisms. Specific experience is required in order to recognize
general principles [10]. Problems are useful to provide specific
experience, and problem-solving is known to improve learning
[8]. Therefore, we set out to develop a tutoring system that would
help students learn parameter passing concepts through problem-
solving.
In this paper, we will describe the tutoring system that we have
developed and the results we obtained from evaluating its
usefulness. We are currently using this tutoring system in our
Comparative Programming Languages course. In the future, we
also plan to use it in Computer Science I to help students better
understand parameter passing by value and reference.
In Section 2, we will describe the design and features of our
tutoring system. In Section 3, we will discuss the design of our
test to evaluate the tutor and the results we obtained. In Section 4,
we will discuss how our work relates to the work of others in this
area. We will discuss future work in Section 5.
2. THE DESIGN OF THE TUTOR
Our tutoring system is designed to help students learn parameter
passing mechanisms by repeatedly solving problems and
obtaining feedback on their solution. The capabilities of the tutor
include:
1. Problem Generation: It can automatically generate
problems, and is capable of generating an unlimited number
of problems.
2. Grading and Feedback: It can grade the user's answer and
provide detailed feedback about the correct answer.
3. Statistics: It can keep track of the user's progress: how many
problems the user has attempted, and how many were solved
correctly and incorrectly.
The tutor is designed to be used as a supplement to classroom
instruction, either in a closed laboratory or for after-class
assignments.

Permission to make digital or hard copies of all or part of this work for
personal or classroom use is granted without fee provided that copies are
not made or distributed for profit or commercial advantage and that
copies bear this notice and the full citation on the first page. To copy
otherwise, or republish, to post on servers or to redistribute to lists,
requires prior specific permission and/or a fee.
ITiCSE '02, June 24-26, 2002, Aarhus, Denmark.
Copyright 2002 ACM 1-58113-499-1/02/0006$5.00.



2.1 Problem Generation
The tutor generates complete programs, consisting of a calling
function, a called function, and optional global variables and
arrays. It generates these programs as parameterized instances of
predefined problem templates, specified in pseudo-BNF notation.
For example, consider the template in Figure 1. In the template, T
refers to a data type, R# a random number, A an array, P a
function, P0 being main(), F a formal parameter, G a global
variable, and L a local variable. Array subscripts are enclosed in
square brackets [], blocks are delimited by braces {}, and function
parameters are enclosed in parentheses (). This template
highlights the difference between pass by reference, when aliasing
is involved, and pass by value-result: the global variable and
formal parameter work independently in the case of parameter
passing by value-result and in unison in the case of parameter
passing by reference.
<T1> <G1> = <R#>;
<T1> <A1> = <R#>;
<P0>() {
    <T1> <L1> = <R#>;
    <P1>(<G1>, <A1>[<L1>]);
}
<P1>(<T1> <F1>, <T1> <F2>) {
    <G1> = <G1> + <R#>;
    <A1>[<G1>] = <R#>;
    <A1>[<F1>] = <R#>;
    <F1> = <F1> + <R#>;
    <A1>[<F1>] = <R#>;
    <F2> = <R#>;
}
Figure 1: A template to illustrate Pass By Reference Versus
Pass By Value-Result
The tutor randomly selects one of several such templates and
generates a program based on the template. It randomly selects
names for variables and functions, the data type of variables,
values of random numbers, etc. The tutor can therefore generate a
combinatorially large number of distinct problems from the
predefined templates, so a user may never see the same problem
twice. To minimize the cognitive load on the learner, the tutor
uses the same (C++) syntax for all the parameter passing
mechanisms.
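
For illustration, the following is one possible instantiation of the Figure 1 template. The names and constants are hypothetical choices of ours, and the program is shown with pass-by-reference semantics, which C++ supports natively (an array size was added so that the program compiles).

#include <iostream>

int global = 2;     // <G1>
int arr[10] = {0};  // <A1>; the size is our addition

void modify(int &f1, int &f2) {  // <P1>(<F1>, <F2>)
    global = global + 1;  // f1 aliases global: both become 3
    arr[global] = 5;      // arr[3] = 5
    arr[f1] = 7;          // f1 == global == 3, so arr[3] is overwritten
    f1 = f1 + 2;          // global becomes 5
    arr[f1] = 9;          // arr[5] = 9
    f2 = 4;               // f2 aliases arr[1], so arr[1] = 4
}

int main() {              // <P0>
    int local = 1;        // <L1>
    modify(global, arr[local]);
    std::cout << global << ' ' << arr[1] << ' '
              << arr[3] << ' ' << arr[5] << '\n';  // prints 5 4 7 9
    return 0;
}

Under value-result, by contrast, f1 would start as an independent copy holding 2; the assignment arr[f1] = 7 would then write to arr[2] rather than arr[3], and the copy-out at the return would overwrite global with f1's final value of 4, illustrating how the global variable and the formal parameter work independently.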

2.2 Feedback
The tutor can currently provide feedback at two levels:
Minimal feedback: whether the user's answer is correct or
not.
Detailed feedback: in addition to whether the user's answer
is correct or incorrect, the tutor can explain the correct answer
by describing the behavior of the program as it is executed.
According to the traditional Intelligent Tutoring Systems
literature, this is demand feedback [1], i.e., it is provided
only when the user asks for it.

Demand Feedback: The feedback is provided at four stages of
execution: before the function call, during the function call,
during the function execution and during the function return.
Before the function call: In the case of value, value-result,
name and reference mechanisms, the tutor lists the initial
values of the variables used as actual parameters. In the case
of result, this is done only if the actual parameters are
evaluated at the time of the function call.
During the function call: In the case of value and value-
result mechanisms, the tutor identifies the formal parameters
into which values of actual parameters are copied. In the case
of reference mechanism, the tutor highlights the aliasing
between actual and formal parameters.
During the function execution: The tutor explains the
execution of the called function line by line, indicating at
each step the values of any variables changed in that step. In
the case of parameter passing by name, though, the tutor first
rewrites the body of the called function, replacing each
instance of a formal parameter with its corresponding actual
parameter (see the sketch after this list).
During the function return: In the case of value-result, the
tutor indicates how the values of formal parameters are
copied back into the corresponding actual parameters. In the
case of result mechanism, the tutor tailors its feedback based
on whether the actual parameters are evaluated at the time of
function call or function return. In both the parameter
passing mechanisms, it takes into account whether the actual
parameters are evaluated left to right or right to left,
especially when a variable is used as an actual parameter
more than once.
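
A C++ preprocessor macro performs the same textual substitution that the tutor displays for parameter passing by name, so it can approximate the rewriting step (a macro lacks true thunks, but it suffices for illustration; the macro and names here are our own).

#include <iostream>

int i = 1;
int a[5] = {10, 20, 30, 40, 50};

// Under call by name, the actual-parameter text replaces the formal
// parameter and is re-evaluated at every use. The macro below makes
// that substitution explicit for a body with one formal parameter f.
#define CALLED_BODY(f)                                    \
    do {                                                  \
        (f) = 0;   /* a[i] with i == 1: a[1] = 0 */       \
        i = i + 2; /* changing i changes what f denotes */ \
        (f) = 99;  /* a[i] with i == 3: a[3] = 99 */      \
    } while (0)

int main() {
    CALLED_BODY(a[i]);  // the actual parameter is the text "a[i]"
    for (int k = 0; k < 5; ++k)
        std::cout << a[k] << ' ';
    std::cout << '\n';  // prints 10 0 30 99 50
    return 0;
}

Under reference or value-result, by contrast, both assignments would target a[1], since the subscript is evaluated only once, at the time of the call.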
2.3 User Interface
Figure 2 (included at the end of the paper) shows the user
interface of the tutor. The user is led through a clockwise flow of
action: from the program in the top left panel (1), to the problem
statement and controls for entering the user's answers in the right
panel (2), followed by the "Check My Answer" button at the
bottom right (3), the feedback in the bottom left panel (4), and
finally, the "Create New Problem" button in the left center (5).
The tutor makes the "Check My Answer" and "Create New
Problem" buttons available only in their correct contexts, to avoid
confusion. In its feedback, the tutor refers to the line numbers
printed alongside the code in the top left panel. For the
convenience of the user, the tutor reminds the user of the variable
initializations at the top of the right panel.
3. EFFECTIVENESS OF USING THE
TUTOR
In the past, we have reported the results of controlled tests to
compare using our tutors versus using printed workbooks to help
students learn by solving problems. These tests were conducted
with other tutors that we had developed, whose design and
objectives were similar. We found that the tutors were at least as
effective as printed workbooks in helping students learn the
material; often, the improvement with our tutors was twice that
obtained with printed workbooks [12,13].
We set out to test two hypotheses with our tutor on parameter
passing:
1. Whether using the tutor would result in a systematic
improvement in the learning of our students;
2. Whether we could observe a difference in student learning
because of detailed versus minimal feedback.
We conducted a controlled test in our Comparative Programming
Languages course. In this section, we will first describe the
protocol we used for the test, and then discuss the results we
obtained from the test.
In our test, we considered only two parameter passing
mechanisms: Value-Result and Name. These were two of the
mechanisms our students were least familiar with, i.e., they had
not learned about them in any earlier course. We had covered all
the parameter passing mechanisms in the class two weeks before
the test.
We used a crossover design: we randomly divided the class into
two sections. We used each section as the control group for one
parameter passing mechanism and as the test group for the other.
The control group was provided with the version of the tutor that
provided minimal feedback, viz., whether the answer was correct
or not, and the correct answer. The test group was provided with
the version of the tutor that provided detailed feedback as
described in Section 2.2. We conducted a pretest and a posttest for
each parameter passing mechanism. The test questions themselves
were generated by the tutor, but were typeset and printed in
hardcopy. The students were told that the better of their pretest
and posttest scores for each parameter passing mechanism would
count towards their course grade (together, the two scores
accounted for 10% of the grade). Therefore, our students
went through the following sequence of steps:
1. The entire class worked with parameter passing by reference
to familiarize itself with the tutor.
2. The entire class took a pretest on value-result for 8 minutes.
3. One group acted as the control and the other group as the test
for practicing with the tutor on value-result. The groups
practiced for 12 minutes. Each student was seated at a
separate computer during the practice session.
4. The entire class took a posttest on value-result for 8 minutes.
5. The entire class filled out a feedback form on the tutor they
had just used.
6. Steps 2-5 were repeated for name, with the control and test
groups switched.
After eliminating floor and ceiling effects, 11 out of 13 student
scores improved from the pretest to the posttest, confirming our
first hypothesis that using the tutoring system would result in a
systematic improvement in the learning of our students.
According to a binomial test, the two-tailed p-value is 0.0224,
confirming that there was a systematic change from pretest to
posttest performance. We are confident in ascribing this change to
the use of the tutor because we had minimized extraneous
influences: students did not take a break between tests, did not
have access to the textbook or any other reference material during
the test, and were not allowed to discuss the material among
themselves during the test.
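
The reported p-value is easy to verify. Under the null hypothesis that a score is equally likely to improve or not, the two-tailed probability of an outcome at least as extreme as 11 improvements out of 13 is 2 x (C(13,11) + C(13,12) + C(13,13)) / 2^13. A short check (our own, not part of the paper):

#include <iostream>

// Binomial coefficient, computed iteratively to avoid factorials.
double choose(int n, int k) {
    double c = 1.0;
    for (int j = 1; j <= k; ++j)
        c = c * (n - j + 1) / j;
    return c;
}

int main() {
    const int n = 13, observed = 11;
    double tail = 0.0;
    for (int k = observed; k <= n; ++k)
        tail += choose(n, k);   // outcomes at least as extreme
    tail /= 8192.0;             // divide by 2^13 possible outcomes
    std::cout << "two-tailed p = " << 2.0 * tail << '\n';
    // prints ~0.02246, matching the reported 0.0224
    return 0;
}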
A mixed factorial analysis of the test scores indicated no
statistically significant difference between the improvement in the
scores of the control group with minimal feedback and the test
group with detailed feedback (p > .10). This finding does not,
however, rule out the possibility that detailed feedback is better
than minimal feedback because our sample size was small (n=8).
However, the feedback we received in Step 5 of the protocol shed
some light on this issue. Two feedback statements written by
students are noteworthy: "Did not get good feedback" and "The
first one was easier to understand because it showed you what
you did wrong." These statements were written on feedback
forms after testing parameter passing by name, by students who
had used the tutor with detailed feedback for value-result before
using it with minimal feedback for name.
Student responses to the feedback-form statement "The tutor
helped me learn new material" were consistent with this. The
group that had used the tutor with detailed feedback for value-
result, followed by minimal feedback for name, averaged 2.25 (on
a Likert scale of 1 for "strongly agree" to 5 for "strongly
disagree") for value-result and 2.75 for name. The other group,
which had used the tutor with minimal feedback for value-result,
followed by detailed feedback for name, averaged 2.25 for value-
result and 1.75 for name.
4. COMPARISON WITH RELATED WORK
Problem-based learning improves long-term retention [8]. A
tutoring system such as ours has several advantages over
textbooks, the traditional source of problems for students:
1. The tutor can instantaneously grade the user's answer and
provide feedback.
2. The tutor can provide detailed feedback unlike printed
textbooks.
3. Our tutor can generate an unlimited supply of problems,
thereby providing as much practice with problem-solving as
the learner wants.
4. Instructors may use our tutor to assign homework or even
administer tests without the fear of plagiarism. They may
also use it to promote active learning, and for distance
education.
Tutoring systems such as ours have been developed for
quantitative disciplines such as Physics (e.g., CAPA [9]), and
electronics and control systems (e.g., CHARLIE [4]). Examples of
such tutoring systems developed for Computer Science include
PILOT [6], SAIL [7] and Gateway labs [3]. PILOT is a problem
generation tool for graph algorithms, SAIL is a LaTeX-based
scripting tool for problem generation, and Gateway Labs generate
problems on mathematical foundations of Computer Science. In
our work, we have attempted to build tutoring systems for
problems based on programs, problems for which the answers
may not always be quantitative [13,14,15]. MuLE [5] shares an
objective with our tutoring system: contrasting the various
parameter passing mechanisms. However, it pursues this objective
by different means, providing an interpretive environment in
which the learner can try out code segments.
WebToTeach [2] is another work similar to our work, but it
administers problems created by the instructor, and does not itself
generate the problems.
The use of problem generation systems has been shown to
increase student performance by 10% in Physics [11], largely due
to increased time spent on the task. Our evaluations seem to
support this result, indicating that tutoring systems do have a role
to play in Computer Science higher education.
5. FUTURE WORK
We have developed tutoring systems for other selected topics in
Computer Science, including expression evaluation in C++ [12],
pointers for indirect addressing in C++ [13] and nested selection
statements in C++ [15]. Each tutor consists of the following
modules:
1. Domain Module which implements the particular domain;
2. Problem Module which generates the next problem;
3. Expert Module which solves the generated problem;
4. Student Module which maintains data about the user;
5. Tutor Module which generates feedback; and
6. User Interface which implements the View/Controller in the
Model-View-Controller pattern.
The User Interface, Student Module and parts of the Tutor
Module can be reused from one tutor to the next. The remaining
modules are distinct to each tutor. Our long-term goal is to
abstract out the common components, and develop a reusable
framework for the implementation of tutoring systems. We need
to gain more experience in building tutors before we can develop
such a framework.
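
As a hypothetical sketch only (the actual tutor is a Java applet; these interfaces are our reading of the module list above, not the authors' code), the framework might separate the per-tutor modules from the shared ones along these lines:

#include <string>

struct Problem { std::string programText; };
struct Answer  { std::string value; };

// Per-tutor modules: implemented afresh for each new domain.
struct ProblemModule {                     // generates the next problem
    virtual Problem nextProblem() = 0;
    virtual ~ProblemModule() = default;
};
struct ExpertModule {                      // solves the generated problem
    virtual Answer solve(const Problem &p) = 0;
    virtual ~ExpertModule() = default;
};

// Shared modules: reusable from one tutor to the next.
struct StudentModule {                     // maintains data about the user
    int attempted = 0, correct = 0;
    void record(bool ok) { ++attempted; if (ok) ++correct; }
};
struct TutorModule {                       // generates feedback
    std::string feedback(const Answer &expected, const Answer &given) {
        return given.value == expected.value
            ? "Correct."
            : "Incorrect; the correct answer is " + expected.value + ".";
    }
};

A concrete tutor would then supply only the domain-specific implementations of ProblemModule and ExpertModule, with the Domain Module and User Interface factored similarly.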
We plan to incorporate into our tutor Error Flagging and
Immediate Feedback, two other types of feedback often found in
Intelligent Tutoring Systems [1]. We plan to continue to test the
tutor in future sections of our Comparative Programming
Languages course, as well as with Computer Science students not
enrolled in the course.
The tutor is implemented as a Java applet so that it can be
accessed over the Web without constraints of time and space. The
tutor uses Swing classes, consists of 18 classes, and is about
207K in size. It is currently available over the Web at
http://orion.ramapo.edu/~amruth/problets
6. ACKNOWLEDGEMENTS
The authors gratefully acknowledge the assistance of Dr. Gordon
Bear in analyzing the results of testing the tutor.
Partial support for this work was provided by the National
Science Foundation's Course, Curriculum and Laboratory
Improvement Program under grant DUE-0088864.
7. REFERENCES
[1] Anderson J.R., Corbett A.T., Koedinger K.R. and Pelletier R.
Cognitive Tutors: Lessons Learned. The Journal of the
Learning Sciences, 4(2), 1995, Lawrence Erlbaum
Associates, 167-207.
[2] Arnow, D. and Barshay, O. WebToTeach: An Interactive
Focused Programming Exercise System. In Proceedings of
FIE '99 (San Juan, Puerto Rico, November 1999), IEEE
Press, Session 12a9.
[3] Baldwin, D. Three years' experience with Gateway Labs. In
Proceedings of ITiCSE '96 (Barcelona, Spain, June 1996),
ACM Press, 6-7.
[4] Barker, D.S. CHARLIE: A Computer-Managed Homework,
Assignment and Response, Learning and Instruction
Environment. In Proceedings of FIE '97 (Pittsburgh, PA,
November 1997), IEEE Press.
[5] Barr J., and Smith King, L.A. Teaching Programming
Languages by Counter-Example. In Proceedings of the
Eleventh Annual Eastern Small College Computing
Conference (New Rochelle, NY, October 1995), 48-54.
[6] Bridgeman, S., Goodrich, M.T., Kobourov, S.G., and
Tamassia, R. PILOT: An Interactive Tool for Learning and
Grading. In Proceedings of SIGCSE '00 (Austin, TX, March
2000), ACM Press, 139-143.
[7] Bridgeman, S., Goodrich, M.T., Kobourov, S.G., and
Tamassia, R. SAIL: A System for Generating, Archiving, and
Retrieving Specialized Assignments Using LaTeX. In
Proceedings of SIGCSE '00 (Austin, TX, March 2000),
ACM Press, 300-304.
[8] Farnsworth, C.C. Using computer simulations in problem-
based learning. In Proceedings of the Thirty-Fifth ADCIS
Conference (Nashville, TN, 1994), Omni Press, 137-140.
[9] Kashy, E., Sherrill, B.M., Tsai, Y., Thaler, D., Weinshank,
D., Engelmann, M., and Morrissey, D.J. CAPA, An
Integrated Computer Assisted Personalized Assignment
System. American Journal of Physics, 61(12), (1993), 1124-
1130.
[10] Locke J. An Essay Concerning Human Understanding. Book
2, Chapter 1, Section 2, Britannica Great Books, 1952.
[11] Kashy E., Thoennessen, M., Tsai, Y., Davis, N.E., and
Wolfe, S.L. Using Networked Tools to Enhance Student
Success Rates in Large Classes. In Proceedings of FIE '97
(Pittsburgh, PA, November 1997), IEEE Press.
[12] Krishna A. and Kumar A. A Problem Generator to Learn
Expression Evaluation in CS I and its Effectiveness. The
Journal of Computing in Small Colleges. (to appear).
[13] Kumar A. Learning the Interaction between Pointers and
Scope in C++. In Proceedings of the Sixth Annual Conference
on Innovation and Technology in Computer Science
Education (ITiCSE 2001) (Canterbury, UK, June 2001),
45-48.
[14] Kumar A. Dynamically Generating Problems on Static
Scope. In Proceedings of the Fifth Annual Conference on
Innovation and Technology in Computer Science Education
(ITiCSE 2000) (Helsinki, Finland, July 2000), 9-12.
[15] Singhal N., and Kumar A. Facilitating Problem-Solving on
Nested Selection Statements in C/C++. In Proceedings of
FIE '00 (Kansas City, MO, October 2000), IEEE Press.




Figure 2: The clockwise flow of action in the tutor (a problem on parameter passing by value-result in progress).

