DOI 10.1007/s10763-016-9733-y
Ahmed Ibrahim & Mark W. Aulls
Abstract Inquiry engagement is a newly defined construct that represents participation in carrying out the practices of science and engineering to achieve learning outcomes, and it is influenced by learners' personalities and teachers' roles. Expectancy-value theory posits that attainment values are important components of task values that, in turn, directly influence students' achievement-related choices and performance. The current paper developed and validated the McGill Attainment Value for Inquiry Engagement Survey (MAVIES) with undergraduate students in STEM disciplines. The MAVIES is a 67-item, learner-focused survey that addresses four components that are theoretically important for engaging in scientific inquiry: (a) teachers' roles, (b) students' personalities, (c) inquiry learning outcomes, and (d) practices of science and engineering. Evidence for internal consistency and construct, content, and criterion validity was obtained from 85 undergraduates who had experience with scientific inquiry in diverse STEM fields. Confirmatory factor analyses confirmed factors that were consistent with role theory, the Big Five personality traits, revised Bloom's learning outcomes, and the Next Generation Science Standards. The MAVIES is a reliable and valid instrument for measuring undergraduate students' attainment values for scientific inquiry engagement in STEM disciplines.
Keywords Attainment value · Inquiry engagement · NGSS · Practices of science and engineering · Scientific inquiry · STEM
* Ahmed Ibrahim
ahmed.ibrahim@ucr.edu
Ibrahim et al.
scaffolding learning and motivating the learner. Accordingly, in this paper, we defined four dimensions of inquiry engagement: (a) teachers' roles, (b) students' personalities, (c) practices of science and engineering, and (d) inquiry learning outcomes. The term engagement in inquiry has been used previously (Peterson, 2012), but with an emphasis on inquiry phases and the inquiry cycle, which has been widely criticized (e.g. Moeed, 2013; Rudolph, 2005; Talanquer, Tomanek, & Novodvorsky, 2013; Tang, Coffey, Elby, & Levin, 2010; Windschitl, 2004), and without including the student or the teacher as part of the engagement in inquiry.
The objective of this paper is to develop and validate an instrument that measures the importance students assign to the dimensions of inquiry engagement through participation in carrying out the practices of science and engineering to understand disciplinary knowledge.
engagement through the four dimensions (to be explained in more detail in the next sections), and especially with an emphasis on the practices of science and engineering, is timely and needed.
Teachers' Roles
The concept of role is one of the most popular ideas in the social sciences and can be defined as "characteristic behavior patterns" (Biddle, 1986, p. 67). Role theory deals with "the organization of social behavior at both the individual and the collective levels" (Turner, 2001, p. 233). In inquiry- and learner-centered settings, teachers (and learners) take on different and diverse roles (Aulls, Kaur Magon, & Shore, 2015; Crawford, 2000; Tudor, 1993; Walker & Shore, 2015). The teachers' roles of motivator and encourager were among those identified as common to effective instructors and effective inquiry instructors, whereas the teacher's role as a model was found only among effective inquiry instructors (Aulls & Ibrahim, 2012).
Students' Personalities
Engaging in science involves being inventive, creative, systematic, reflective, sharing, and collaborative (Aulls & Shore, 2008). The widely cited Big Five personality-trait theory comprises five broad dimensions (Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism) that can be used to describe human personality (Goldberg, 1993). In a meta-analysis of samples totaling more than 70,000 participants, mainly in tertiary education, Poropat (2009) found that academic performance was significantly correlated with agreeableness, conscientiousness, and openness. In the current study, we focused on openness, conscientiousness, and extraversion as student personalities (SPs) for which it is important to assess how much they are valued when engaging in scientific inquiry.
Openness to experience refers to engagement in idea-related behavior such as
intellectuality, creativity, unconventionality, and innovation (Ashton & Lee, 2001,
2007), a propensity for innovation and astuteness in solving problems (Buss, 1991,
1996), permeability, breadth, and depth of consciousness, the need to enlarge and
examine experience (McCrae & Costa, 1996, 1997), intellectuality and creativity (van
Lieshout, 2000), creativity and unusual thinking (Nettle, 2006), imagination (McCrae,
1993), and intrinsically motivated curiosity and creativity (MacDonald, 1995, 1998).
Openness to experience is strongly related to scientific creativity (Simonton, 2004).
Conscientiousness refers to engagement in task-related behaviors such as being
organized and disciplined (Ashton & Lee, 2001, 2007) and adherence to plans,
schedules, and requirements (McCrae & Costa, 1996, 1997). Komarraju, Karau,
Schmeck, and Avdic (2011) found that conscientiousness and agreeableness were
positively related to four learning styles (synthesis and analysis, methodical study,
fact retention, and elaborative processing), in a sample of 308 undergraduates.
Extraversion refers to engagement in socially related behaviors, such as expressiveness and sociability (Ashton & Lee, 2001) and is considered important for engagement
in authentic scientific inquiry. Sharing, collaborating, and contributing are essential
components of constructing scientific arguments and engaging in a scientific community as shown by the ethnographic and historical study of laboratory groups and
research domains (Collins & Pinch, 1993; Latour & Woolgar, 1986; Mody, 2015; NRC,
2012; Pickering, 1995).
Inquiry-Learning Outcomes
Inquiry-learning outcomes (ILOs) include skills, dispositions, motivation, and development
of expertise (Saunders-Stewart, Gyles, & Shore, 2012; Shore et al., 2009). Other learning
outcomes that could result from engagement in scientific inquiry are related to educational
objectives such as comprehension, analysis, applying knowledge, evaluation, and synthesis. Saunders-Stewart et al. (2012) distilled from the literature a list of 23 student outcomes of participation in inquiry, among which three addressed different kinds of understanding, one made application of knowledge explicit, two specifically included self-regulation and metacognition (although the word evaluation was not used), and another mentioned authentic products (related to synthesis). Using principal components analysis with the 23 items, Saunders-Stewart, Gyles, Shore, and Bracewell (2015) identified learning competencies, motivation, student roles, and teacher roles as four components underlying these 23 outcomes, supporting our present focus on instructors' roles, student personalities, and inquiry outcomes.
Practices of Science and Engineering
Using the idea of practices represents a major turn toward ontology in science (Woolgar &
Lezaun, 2013). The Next Generation Science Standards (NGSS) emphasized that this
approach is considered a major improvement over previous views of describing science
(NRC, 2012). The NRC proposed eight Practices of Science and Engineering (PSE; Next Generation Science Standards Lead States [NGSS], 2013; NRC, 2012): (a) asking questions (for science) and defining problems (for engineering), (b) developing and using models, (c) planning and carrying out investigations, (d) analyzing and interpreting data, (e) using mathematics and computational thinking, (f) constructing explanations (for science) and designing solutions (for engineering), (g) engaging in argument from evidence, and (h) obtaining, evaluating, and communicating information.
Objectives
Based on the above review, our specific goals were to (a) develop a learner-focused survey instrument that measures STEM undergraduate students' attainment values for inquiry engagement and (b) validate this instrument on theoretical and conceptual grounds by examining its construct, content, and criterion validity.
Method
Item Development of the McGill Attainment Value for Inquiry Engagement
Survey (MAVIES) in STEM
We searched the literature for instruments and items that could provide an item pool for a survey we intended to create to measure STEM students' self-assessment of how much they value (or assign importance to) different components of inquiry engagement. For example, Pedaste et al. (2015) listed 32 articles describing instruments and frameworks that contained several items and components potentially useful for addressing different components of the instrument we intended to create.
We decided to use the McGill Strategic Demands of Inquiry Questionnaire (MSDIQ; Shore et al., 2012) as the basis for constructing the item pool for the survey of STEM students' attainment values for four components of scientific inquiry: (a) teachers' roles, (b) students' personalities, (c) learning outcomes, and (d) practices of science and engineering. The MSDIQ is a 79-item, criterion-referenced, learner-focused questionnaire that addressed the strategic demands of inquiry in order "to emphasize the presence of relevant knowledge, skills, or predispositions that are deemed curricular imperatives in the inquiry literature" (p. 319). We chose the MSDIQ because it contained relevant and sufficient items representing each of the four components that we sought to measure: teachers' roles, students' personalities, learning outcomes, and practices of science and engineering. We acknowledge that the MSDIQ did not address all the PSE in the NGSS; however, it represented a rich repertoire of items addressing six of the eight PSE (see Tables 1, 2, 3, and 4).
An expert panel reviewed the MSDIQ items to assess their adequacy as a basis for a survey of STEM students' attainment values for the teachers' roles, students' personalities, inquiry-learning outcomes, and PSE in the context of scientific inquiry. Twelve items (including two distractors) were excluded from the original 79-item MSDIQ instrument because they did not discriminate among students. After confirming the relevance of the remaining 67 items, the survey was constructed and
administered with an 11-point Likert-type scale, as in the original MSDIQ. To make sure that students answered the questions in the way we intended (i.e. to rate the importance of engaging in each item in inquiry-based learning and teaching), we stated specifically: "Engaging in an inquiry task has several possible elements. We would like to know how you rate the importance of the following items. Each item is prefaced by the question: how important is it in inquiry-based learning and teaching to …" Thus, students answered the questions in reference to engagement in inquiry-based learning and teaching. Also, because inquiry has several meanings, we
highlighted to the respondents that "the pedagogical approach known as inquiry has many possible meanings."

Table 1 Item loadings for teachers' roles (TRs). [Item-level loadings garbled in extraction; values ranged from .33 to .95, including "Provide a mentor" (.95).]

Table 2 Item loadings for students' personalities: Extraversion, Openness, and Conscientiousness. [Item-level loadings garbled in extraction; values ranged from .22 to .94, including "Share decision-making" (.73).]

Our intention was to make the students focus on the
tasks that constitute inquiry across several possible definitions of inquiry that they may
hold. In other words, we acknowledged that students may have different definitions and conceptions of inquiry, which is very plausible given that inquiry has several definitions in the literature and among experts (NRC, 2012). Our focus was
to measure students' ratings of the importance of the items' content across the variability of inquiry.

Table 3 Item loadings for inquiry learning outcomes (ILOs): understanding conceptual knowledge, understanding metacognitive knowledge, application, evaluation, and creating. [Item-level loadings garbled in extraction; values ranged from .40 to .88.]

Table 4 Item loadings for practices of science and engineering (PSE), with factors including plan investigation, communicate knowledge, understand instructions, make a plan, make suggestions, record data, classify data, and explain results. [Item-level loadings garbled in extraction; values ranged from .24 to .94.]

For example, when students were asked to rate the importance of
understanding key concepts, the focus was on the importance assigned to understanding across multiple possible views of the meaning of inquiry, because the different ratings (despite those different views) would all relate to the single domain of inquiry-based learning and teaching. The main point was to ensure that students' responses were accurate in referring to a single domain of inquiry, not multiple domains, while acknowledging the possible variability in precision given students' different conceptions of inquiry. Acknowledging these facts and stating them in the survey strengthened the validity of the instrument. For a discussion of the difference between accuracy and precision, see Bevington and Robinson (2003).
In our conceptualization of the MAVIES instrument, we envisaged an instrument
that could be used to assess how much students assign value (importance) to components of engagement in inquiry. The PSE in MAVIES are treated as separate and
independent activities that correspond to the PSE in the NGSS framework. We did not impose any sequence, in deference to critiques in the literature of stepwise, discrete, sequential, stage, or phase organizations of the components involved in scientific inquiry. Sequences do not represent authentic inquiry, can distract students,
and can give them an unrealistic view about engaging in scientific inquiry or practices
of scientists and engineers (Moeed, 2013; Rudolph, 2005; Talanquer et al., 2013; Tang,
Coffey, Elby, & Levin, 2010; Windschitl, 2004). We combined constructing explanations and engaging in argument from evidence because we took a pluralistic
approach to explanation and reasoning (Lombrozo, 2009, 2010). This nonsequential
theoretical stance influenced our analytic steps in proposing and analyzing factors of
attainment values separately.
Instrument Administration
Typical case (Kuzel, 1999; Patton, 1990) sampling strategy was used, which highlights what is normal or average and increases confidence in conclusions (Miles &
Huberman, 1994, p. 28). We sought to include a large sample of undergraduates in as
many STEM disciplines as possible, so the instrument could be used with students
enrolled in any STEM field, not limited to a particular discipline, or university year. We
treated the sample as a homogenous group representing undergraduate students in
STEM fields. The sample comprised 85 undergraduate students in architecture, biochemistry, biology, chemical engineering, chemistry, computer engineering, computer
science, electrical engineering, environmental science, materials engineering, mathematics, mechanical engineering, and physics. This sample size (N = 85) relative to the
size of the subscale with the maximum number of items (q = 7) exceeds the minimum
suggested value for the ratio of sample size (N) to number of parameters to be estimated
(q) (Jackson, 2003; Kline, 2011) because we treated each subscale as a separate
instrument by assuming orthogonality between subscales based on the theoretical
grounding of using the different components of inquiry engagement orthogonally to
represent authentic realistic engagement in scientific inquiry (as discussed in the
previous section). Accordingly, the sample size was adequate and sufficient and
satisfied the N:q hypothesis within each subscale. Students in our sample indicated
that they had on average 4.1 years (SD = 2.6) experience in inquiry-based education,
ranging from 1 to 11 years; the median and mode were 5 years.
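The N:q check above can be sketched in a few lines of Python. This is our illustration, not the authors' calculation: the paper uses the largest subscale's item count (7) as q, while a stricter reading of Kline's (2011) rule counts free parameters (here assumed to be one loading plus one residual variance per indicator for a standardized one-factor model); both readings clear the common 5:1 minimum.

```python
# Illustrative check of the N:q rule of thumb (Kline, 2011) for a one-factor
# CFA. Assumption (ours, for illustration): a standardized one-factor model
# with the factor variance fixed to 1 frees one loading and one residual
# variance per indicator, so q = 2 * n_items.

def one_factor_param_count(n_items: int) -> int:
    """Free parameters: one loading + one residual variance per indicator."""
    return 2 * n_items

N = 85                                  # sample size reported above
items = 7                               # largest subscale's item count
params = one_factor_param_count(items)  # stricter parameter count -> 14
print(f"N:items = {N / items:.1f}, N:params = {N / params:.1f}")
# N:items = 12.1, N:params = 6.1 -- both above the 5:1 minimum
```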
We used the Monte Carlo approach to determine the adequacy of the sample size for statistical power (Brown, 2006). Because of our theoretical stance, we treated each item cluster as belonging to a single factor representing one of the four areas of importance for engaging in scientific inquiry (teachers' roles, students' personalities, ILOs, and PSE). Each factorial model tested consisted of a single factor and its underlying items. In the Monte Carlo population model, we used equal loadings for the items on the single factor of each model tested. We used 10,000 replications and a random seed. The Monte Carlo power analysis with N = 85 indicated that this sample size would provide 100 % power in each confirmatory factor analysis (CFA) model tested.
Establishing Construct Validity
Construct validity refers to "whether a given measure, or operational definition, actually assesses the underlying conceptual variable, or construct, that the measure is intended to represent" (Bryant, 2000, p. 111), or the "degree of agreement with theoretical expectations" (Knapp & Mueller, 2010, p. 340). The factors of each of the four clusters of the MAVIES represented constructs that are strongly supported theoretically and conceptually.
Teachers' roles are supported in the inquiry literature (Aulls & Ibrahim, 2012; Saunders-Stewart, Gyles, Shore, & Bracewell, 2015; Walker & Shore, 2015) and by role theory (Turner, 2001) in social psychology. Students' personalities (openness, conscientiousness, and extraversion) match three factors of the Five Factor Model (FFM; Goldberg, 1993). Students' personalities are positive predictors of academic performance (Poropat, 2009). The inquiry learning outcomes (understanding conceptual knowledge, understanding metacognitive knowledge, application, evaluation, and creation) are based on the cognitive learning objectives in Bloom's taxonomy and its revision (Anderson et al., 2001; Bloom, Engelhart, Furst, Hill, & Krathwohl, 1956; Krathwohl, 2002). Finally, the PSE are based on the NRC's (2012) conceptual model and framework for science education underlying the NGSS.
Establishing Content Validity Using Confirmatory Factor Analysis
Content validity refers to "expert judgments of the representativeness of items with respect to the skills, knowledge, etc. domain to be covered" (Knapp & Mueller, 2010, p. 340). First, we used experts' judgments of the representativeness of the MAVIES items for each of the factors in its four components. Three experts in scientific inquiry, three experts in higher education, and three experts in science education agreed 100 % that the items of the instrument represented their corresponding factors. Second, we used CFA for the psychometric evaluation and validation of the instrument (Brown, 2006). CFA requires theoretical or conceptual frameworks for specifying the item-factor relations (Brown, 2006). For the four MAVIES components, we relied on adequate theories to specify the CFA models.
CFA Model Specification. Item-factor relations were identified conceptually, based on experts' judgments and with reference to relevant theories, before CFA testing. We used role theory for teachers' roles, the FFM for students' personalities, Bloom's taxonomy of educational objectives (and its revision) for the ILOs, and the NRC (2012) framework for the PSE. All factors in the models for the four components of the instrument were considered latent rather than emergent: a latent factor explains variance in its measured indicator variables and induces covariance among them, whereas an emergent variable is a composite formed from its indicators. We also noted that "correlated errors can arise due to (a) items that are very similarly worded, (b) items that are reverse worded, or (c) items that are differentially prone to social desirability" (Brown, 2006, p. 181).
CFA Factor Loadings and Error Residuals. The standardized residual variance for indicators (observed or manifest variables) is calculated as 1 − (variance accounted for by the latent factor) and provides "the proportion of variability in each manifest variable that is not accounted for by the corresponding latent factor" (Geiser, 2012, p. 50).
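For a standardized solution this computation reduces to one line, since the latent factor accounts for the squared loading of each indicator's variance. A minimal sketch, with illustrative loadings in the range reported in the tables (not tied to specific items):

```python
# Standardized residual variance (Geiser, 2012): for a standardized
# indicator, the factor explains loading**2 of the variance, so the
# residual is 1 - loading**2.

def residual_variance(std_loading: float) -> float:
    """Proportion of indicator variance NOT explained by the latent factor."""
    return 1.0 - std_loading ** 2

for loading in (0.95, 0.66, 0.33):   # illustrative values
    print(f"loading {loading:.2f} -> residual {residual_variance(loading):.2f}")
# loading 0.95 -> residual 0.10
# loading 0.66 -> residual 0.56
# loading 0.33 -> residual 0.89
```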
Factor Scores and Correlations. We computed scores for each latent factor by
calculating means of all indicator items that contributed to that factor. We used these
scores in regression analyses assessing criterion validity. Correlations among factors
were computed. Significance of each correlation was also calculated.
Results
Content Validity using Confirmatory Factor Analyses (CFA)
All fit and modification indices were better than recommended cutoff criteria (Brown,
2006; Hu & Bentler, 1998; Schermelleh-Engel et al., 2003; West et al., 2012). See
Tables 1, 2, 3, and 4.
Criterion Validation
Evidence for criterion validity was obtained through consistent prediction of learning strategies (Biggs, 1987) from the MAVIES: Surface Learning Strategies (SLS) correlated consistently and negatively with all attainment values (AVs) for the SPs, ILOs, and PSE (see Table 5).
Reliability
Internal consistency was estimated using Cronbach's alpha, computed for each latent factor of the instrument. Cronbach's alpha was .93 for the entire set and above .92 for all factors in the four components of the MAVIES. These reliability assessments indicated that the MAVIES factors are homogeneous, which was expected given the theoretical underpinnings of the instrument's components.
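For a single factor, Cronbach's alpha can be computed from the item variances and the variance of the per-respondent total scores. A minimal sketch with invented ratings (not the study's data):

```python
import statistics

# Cronbach's alpha via the standard formula:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))

def cronbach_alpha(items):
    """items: one list of respondent ratings per item (all equal length)."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]   # per-respondent total score
    item_var = sum(statistics.variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

ratings = [[9, 7, 10, 6, 8],   # item 1 across five respondents
           [8, 6, 9, 5, 8],    # item 2
           [9, 6, 10, 6, 7]]   # item 3
print(round(cronbach_alpha(ratings), 2))  # 0.97
```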
[Table of factor-level statistics, garbled in extraction: Extraversion (18.2, 0.5), Openness (23.5, 1.1), Conscientiousness (18.8, 0.6), Conceptual knowledge (22.4, 1.0), Metacognitive knowledge (18.9, 0.6), Application (21.0, 0.8), Evaluation (17.2, 0.4), Creation (16.7, 0.3), and Plan investigations (19.8, 0.7); column headers and remaining rows not recoverable.]
creating an instrument that would predict students' achievement and performance. The CFA confirmed our hypothesized structures for the MAVIES. Regarding teachers' roles, we chose items that addressed teachers' roles as encouragers and motivators, as well as their roles as models and mentors. Although instructors can take on a multitude of roles, especially in inquiry-learning environments, these two roles specifically addressed the motivational aspect (encouragement and motivation) and the scaffolding aspect (modeling and mentoring) of instruction for engaging in scientific inquiry.
Regarding students' personalities, we chose the openness, conscientiousness, and extraversion traits, based on the FFM, through items that represented these characteristics in scientific inquiry. Including factors representing the attainment values of teachers' roles and students' personalities meant that we addressed both how students value teachers' roles and their own views about which qualities they perceive as important during engagement in scientific inquiry. Regarding the learning outcomes, we chose conceptual and metacognitive understanding, application, evaluation, and creation, based on the educational objectives and learning outcomes in the cognitive domain of Bloom's original and revised taxonomies. Regarding the practices of science and engineering, we chose the PSE based on the NGSS.
The MAVIES is a reliable and valid instrument that can be used with undergraduate students in STEM disciplines to assess their attainment values for inquiry engagement. The instrument fills a gap in the literature by providing a measure of the motivational construct of attainment value, which is, in turn, grounded in expectancy-value theory, in the context of scientific inquiry. The MAVIES instrument is also consistent with the NGSS (NGSS, 2013).
Theoretical Implications
The MAVIES instrument is useful for measuring STEM students' self-assessment of their valuing of four components of engaging in scientific inquiry: teachers' roles, students' personalities, inquiry learning outcomes, and practices of science and engineering. These four components constitute some of the essential building blocks of a theory of scientific inquiry in higher education. Inquiry engagement, as defined and described in this paper, represents a necessary contribution toward linking inquiry with the practices of science and engineering.
Teachers' Roles. The CFA of teachers' roles showed that these roles (for which attainment values were assigned) included (a) teachers' roles as encouragers and motivators and (b) teachers' roles as models and mentors. These roles are especially vital in inquiry. Aulls and Ibrahim (2012) found that, when comparing effective inquiry instructors to effective instructors in general, students reported a higher frequency of the encourager and motivator roles for effective inquiry instructors. Considering instructors' roles as encouragers and motivators is important in an instrument that seeks to measure students' assessment of the importance of teachers' roles during engagement in scientific inquiry. In their qualitative case study, Aulls and Ibrahim's students also mentioned model and mentor among the roles of effective inquiry instructors, but not among those of instructors in general. These roles are important because they resemble two distinct leadership roles
Practical Implications
The MAVIES can be used to measure STEM students' assignment of importance to components of engaging in scientific inquiry in experiments that assess the effects of instruction. Instructors can also use the MAVIES prior to instruction to guide curriculum preparation and the design of instructional activities. The MAVIES can be used in program evaluation by asking whether different STEM programs or courses are associated with significantly different student ratings of the importance of the components of scientific inquiry.
Limitations and Future Research
The MAVIES included subscales addressing six of the eight PSE in the NGSS; it did not include items assessing the NGSS practices of (a) developing and using models and (b) using mathematics and computational thinking. In future revisions of the instrument, we shall add relevant items to represent the practices that were not included and to increase the number of items for factors composed of only three items. Future research is also planned to validate the instrument with samples in the humanities and social science disciplines and to examine the significance of differences between disciplines in undergraduates' self-assessments of importance.
References
Abd-El-Khalick, F., BouJaoude, S., Duschl, R. A., Lederman, N. G., Mamlok-Naaman, R., Hofstein, A., Tuan, H.-L. (2004). Inquiry in science education: International perspectives. Science Education, 88, 397–419.
American Association for the Advancement of Science. (1993). Benchmarks for scientific literacy. New York,
NY: Oxford University Press.
Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., Wittrock, M. C. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives (complete ed.). New York, NY: Longman.
Ashton, M. C. & Lee, K. (2001). A theoretical basis for the major dimensions of personality. European Journal of Personality, 15, 327–353. doi:10.1002/per.417.
Ashton, M. C. & Lee, K. (2007). Empirical, theoretical, and practical advantages of the HEXACO Model of personality structure. Personality and Social Psychology Review, 11, 150–166. doi:10.1177/1088868306294907.
Aulls, M. W., & Ibrahim, A. (2012). Pre-service teachers' perceptions of effective inquiry instruction: Are effective instruction and effective inquiry instruction essentially the same? Instructional Science, 40, 119–139. doi:10.1007/s11251-010-9164-z.
Aulls, M. W., & Shore, B. M. (2008). Inquiry in education (Vol. 1): The conceptual foundations for research as
a curricular imperative. New York, NY: Erlbaum.
Aulls, M. W., Kaur Magon, J., & Shore, B. M. (2015). The distinction between inquiry-based instruction and non-inquiry-based instruction in higher education: A case study of what happens as inquiry in 16 education courses in three universities. Teaching and Teacher Education, 51, 147–161. doi:10.1016/j.tate.2015.06.011.
Azevedo, R. (2015). Defining and measuring engagement and learning in science: Conceptual, theoretical, methodological, and analytical issues. Educational Psychologist, 50, 84–94. doi:10.1080/00461520.2015.1004069.
Bales, R. F. & Slater, P. E. (1955). Role differentiation in small decision-making groups. In T. Parsons & R. F. Bales (Eds.), Family, socialization, and interaction processes (pp. 259–306). Glencoe, IL: Free Press.
Bandalos, D. L. & Finney, S. J. (2010). Factor analysis: Exploratory and confirmatory. In G. R. Hancock & R. O. Mueller (Eds.), The reviewer's guide to quantitative methods in the social sciences (pp. 93–114). New York, NY: Routledge.
Barrow, L. (2006). A brief history of inquiry: From Dewey to standards. Journal of Science Teacher Education, 17, 265–278. doi:10.1007/s10972-006-9008-5.
Battle, A. & Wigfield, A. (2003). College women's value orientations toward family, career, and graduate school. Journal of Vocational Behavior, 62, 56–75. doi:10.1016/S0001-8791(02)00037-4.
Berland, L. K., Schwarz, C. V., Krist, C., Kenyon, L., Lo, A. S. & Reiser, B. J. (2015). Epistemologies in
practice: Making scientific practices meaningful for students. Journal of Research in Science Teaching.
doi:10.1002/tea.21257.
Bevington, P. R. & Robinson, D. K. (2003). Data reduction and error analysis for the physical sciences.
Boston, MA: McGraw-Hill.
Biddle, B. J. (1986). Recent developments in role theory. Annual Review of Sociology, 12, 67–92.
Biggs, J. (1987). Learning Process Questionnaire manual: Student approaches to learning and studying.
Retrieved from http://www.eric.ed.gov/ERICWebPortal/Home.portal?_nfpb=true&ERICExtSearch_
SearchType_0=no&_pageLabel=ERICSearchResult&ERICExtSearch_SearchValue_0=no%
3Aed308199&spelling=yes.
Bloom, B. S., Engelhart, M. D., Furst, E. J., Hill, W. H. & Krathwohl, D. R. (1956). Taxonomy of educational
objectives: The classification of educational goals. New York, NY: McKay.
Blumenfeld, P. C., Kempler, T. M. & Krajcik, J. S. (2006). Motivation and cognitive engagement in learning environments. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 475–488). Cambridge, England: Cambridge University Press.
Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York, NY: Guilford Press.
Bryant, F. B. (2000). Assessing the validity of measurement. In L. G. Grimm & P. R. Yarnold (Eds.), Reading and understanding more multivariate statistics (1st ed., pp. 99–146). Washington, DC: American Psychological Association.
Buss, D. M. (1991). Evolutionary personality psychology. Annual Review of Psychology, 42, 459–491.
Buss, D. M. (1996). Social adaptation and five major factors of personality. In J. S. Wiggins (Ed.), The five factor model of personality: Theoretical perspectives (pp. 180–207). New York, NY: Guilford Press.
Campbell, T., Abd-Hamid, N. & Chapman, H. (2010). Development of instruments to assess teacher and student perceptions of inquiry experiences in science classrooms. Journal of Science Teacher Education, 21, 13–30. doi:10.1007/s10972-009-9151-x.
Chi, M. T. H. & Wylie, R. (2014). The ICAP framework: linking cognitive engagement to active learning
outcomes. Educational Psychologist, 49, 219243. doi:10.1080/00461520.2014.965823.
Clinton, V. (2014). The relationship between students preferred approaches to learning and behaviors during
learning: An examination of the process stage of the 3P model. Instructional Science, 42, 817837. doi:
10.1007/s11251-013-9308-z.
Cole, J. S., Bergin, D. A. & Whittaker, T. A. (2008). Predicting student achievement for low stakes tests with
effort and task value. Contemporary Educational Psychology, 33, 609-624. doi:10.1016/j.cedpsych.2007.
10.002.
Collins, H. L. & Pinch, T. (1993). The golem: What everyone should know about science. Cambridge,
England: Cambridge University Press.
Crawford, B. A. (2000). Embracing the essence of inquiry: New roles for science teachers. Journal of
Research in Science Teaching, 37, 916-937. doi:10.1002/1098-2736(200011)37:9<916::AID-TEA4>3.0.CO;2-2.
Eccles, J. S. (2005). Subjective task value and the Eccles et al. model of achievement-related choices. In A. J.
Elliott & C. S. Dweck (Eds.), Handbook of competence and motivation (pp. 105-121). New York, NY:
Guilford Press.
Eccles, J. S., Adler, T. F., Futterman, R., Goff, S., Kaczala, C. M. & Meece, J. (1983). Expectancies, values, and
academic behaviors. In J. T. Spence (Ed.), Achievement and achievement motives (pp. 75-146). San
Francisco, CA: Freeman.
Eccles, J. S., O'Neill, S. A. & Wigfield, A. (2005). Ability self-perceptions and subjective task values in
adolescents and children. In K. A. Moore & L. H. Lippman (Eds.), What do children need to flourish?
(Vol. 3, pp. 237-249). New York, NY: Springer.
Eccles, J. S., Wigfield, A., Harold, R. D. & Blumenfeld, P. (1993). Age and gender differences in children's
self- and task perceptions during elementary school. Child Development, 64, 830-847. Retrieved from
http://www.jstor.org/stable/1131221.
Entwistle, N. (1991). Approaches to learning and perceptions of the learning environment. Higher Education,
22, 201-204.
Fredricks, J. A., Blumenfeld, P. C. & Paris, A. H. (2004). School engagement: Potential of the concept, state of
the evidence. Review of Educational Research, 74, 59-109. doi:10.3102/00346543074001059.
Geiser, C. (2012). Data analysis with Mplus. New York, NY: Guilford Press.
Goldberg, L. R. (1993). The structure of phenotypic personality traits. American Psychologist, 48, 26-34. doi:
10.1037/0003-066X.48.1.26.
Gott, R. & Duggan, S. (2002). Problems with the assessment of performance in practical science: which way
now? Cambridge Journal of Education, 32, 183-201. doi:10.1080/03057640220147540.
Greeno, J. G., Collins, A. M. & Resnick, L. B. (1996). Cognition and learning. In D. C. Berliner & R. C.
Calfee (Eds.), Handbook of educational psychology (pp. 15-46). New York, NY: Prentice Hall.
Hanauer, D. I., Hatfull, G. F. & Jacobs-Sera, D. (2009). Assessing scientific inquiry. New York, NY: Springer.
Hu, L.-T. & Bentler, P. M. (1998). Fit indices in covariance structure modeling: Sensitivity to
underparameterized model misspecification. Psychological Methods, 3, 424-453. doi:10.1037/1082-989X.3.4.424.
Jaber, L. Z., & Hammer, D. (2015). Engaging in science: A feeling for the discipline. Journal of the Learning
Sciences, 1-47. doi:10.1080/10508406.2015.1088441.
Jackson, D. L. (2003). Revisiting sample size and number of parameter estimates: Some support for the N:q
hypothesis. Structural Equation Modeling, 10, 128-141. doi:10.1207/S15328007SEM1001_6.
Johnson, M. L. & Sinatra, G. M. (2013). Use of task-value instructional inductions for facilitating engagement
and conceptual change. Contemporary Educational Psychology, 38, 51-63. doi:10.1016/j.cedpsych.2012.
09.003.
Kline, R. B. (2011). Principles and practice of structural equation modeling (3rd ed.). New York, NY:
Guilford Press.
Pedaste, M., Mäeots, M., Siiman, L. A., de Jong, T., van Riesen, S. A. N., Kamp, E. T., ... Tsourlidaki, E.
(2015). Phases of inquiry-based learning: Definitions and the inquiry cycle. Educational Research
Review, 14, 47-61. doi:10.1016/j.edurev.2015.02.003.
Peterson, C. A. (2012). Mentored engagement of secondary science students, plant scientists, and teachers in
an inquiry-based online learning environment. (Unpublished doctoral dissertation), Texas A&M
University, College Station, TX.
Pett, M. A., Lackey, N. R. & Sullivan, J. J. (2003). Making sense of factor analysis: The use of factor analysis
for instrument development in health care research. Thousand Oaks, CA: Sage.
Pickering, A. (1995). The mangle of practice: Time, agency, and science. Chicago, IL: University of Chicago
Press.
Poropat, A. E. (2009). A meta-analysis of the five-factor model of personality and academic performance.
Psychological Bulletin, 135, 322-338. doi:10.1037/a0014996.
Ramsden, P. (2003). Approaches to learning. In P. Ramsden (Ed.), Learning to teach in higher education (2nd
ed., pp. 39-61). London, England: Routledge.
Redden, K. C., Simon, R. A., & Aulls, M. W. (2007). Alignment in constructivist-oriented teacher education:
Identifying pre-service teacher characteristics and associated learning outcomes. Teacher Education
Quarterly, 34, 149-164. Retrieved from http://www.jstor.org/stable/23478999.
Roberts, R. & Gott, R. (2003). Assessment of biology investigations. Journal of Biological Education, 37,
114-121. doi:10.1080/00219266.2003.9655865.
Roberts, R. & Gott, R. (2004). A written test for procedural understanding: a way forward for assessment in
the UK science curriculum? Research in Science & Technological Education, 22, 5-21. doi:10.1080/
0263514042000187511.
Roberts, R. & Gott, R. (2006). Assessment of performance in practical science and pupil attributes. Assessment
in Education: Principles, Policy & Practice, 13, 45-67. doi:10.1080/09695940600563652.
Rudolph, J. L. (2005). Epistemology for the masses: the origins of the scientific method in American
schools. History of Education Quarterly, 45, 341-376. doi:10.1111/j.1748-5959.2005.tb00039.x.
Saunders-Stewart, K. S., Gyles, P. D. T., Shore, B. M., & Bracewell, R. J. (2012). Student outcomes in inquiry:
Students' perspectives. Learning Environments Research, 18, 289-311. doi:10.1007/s10984-015-9185-2.
Schermelleh-Engel, K., Moosbrugger, H. & Müller, H. (2003). Evaluating the fit of structural equation
models: Tests of significance and descriptive goodness-of-fit measures. Methods of Psychological
Research, 8, 23-74. Retrieved from http://www.dgps.de/fachgruppen/methoden/mpr-online/issue20/art2/
mpr130_13.pdf.
Shavelson, R. J., Solano-Flores, G. & Ruiz-Primo, M. A. (1998). Toward a science performance assessment
technology. Evaluation and Program Planning, 21, 171-184. doi:10.1016/S0149-7189(98)00005-6.
Shore, B. M., Birlean, C., Walker, C. L., Ritchie, K. C., LaBanca, F., & Aulls, M. W. (2009). Inquiry literacy:
A proposal for a neologism. Learning Landscapes, 3, 139-155. Retrieved from http://www.
learninglandscapes.ca/.
Shore, B. M., Chichekian, T., Syer, C. A., Aulls, M. W., & Frederiksen, C. H. (2012). Planning, enactment, and
reflection in inquiry-based learning: Validating the McGill Strategic Demands of Inquiry Questionnaire.
International Journal of Science and Mathematics Education, 10, 315-337. doi:10.1007/s10763-011-9301-4.
Simonton, D. K. (2004). Creativity in science: Chance, logic, genius, and Zeitgeist. Cambridge, England:
Cambridge University Press.
Slater, P. E. (1955). Role differentiation in small groups. American Sociological Review, 20, 300-310.
Retrieved from http://www.jstor.org/stable/2087389.
Spronken-Smith, R. (2010). Undergraduate research and inquiry-based learning: Is there a difference? Insights
from research in New Zealand. CUR (Council on Undergraduate Research) Quarterly, 30, 28-35.
Retrieved from http://www.cur.org/assets/1/7/Spronken-Smith.pdf.
Spronken-Smith, R. & Walker, R. (2010). Can inquiry-based learning strengthen the links between teaching
and disciplinary research? Studies in Higher Education, 35, 723-740. doi:10.1080/03075070903315502.
Spronken-Smith, R., Walker, R., Batchelor, J., O'Steen, B. & Angelo, T. (2012). Evaluating student perceptions of learning processes and intended learning outcomes under inquiry approaches. Assessment and
Evaluation in Higher Education, 37, 57-72. doi:10.1080/02602938.2010.496531.
Stroupe, D. (2015). Describing science practice in learning settings. Science Education, 99, 1033-1040. doi:
10.1002/sce.21191.
Talanquer, V., Tomanek, D. & Novodvorsky, I. (2013). Assessing students' understanding of inquiry: What do
prospective science teachers notice? Journal of Research in Science Teaching, 50, 189-208. doi:10.1002/
tea.21074.