
Discuss the potential issues and benefits of conducting research in a real-world environment.
Recent years have seen an increase in the availability and use of educational technology in schools, pre-schools, youth centres and many other places. This is occurring globally, from Lebanon to Kenya (Khan, 2012). The evaluation of this technology, particularly in relation to its potential learning benefits, is an important topic. Firstly, this paper will define the term educational technology research (ETR) and contextualise the argument by outlining debates in the field. Subsequently, illustrating with examples, it will clarify when evaluation should be undertaken 'in the wild' (in a real-world environment) rather than in a laboratory. Finally, it will demonstrate how seemingly conflicting ideas about ETR may be incorporated into one project, using an example to illustrate how numerous criticisms of research in this field may be overcome.
Educational technology is a unique area of study, with researchers who consider themselves not technicians but applied learning theorists, aiming carefully and purposefully to use theory to apply technology to learning environments in order to promote learning (Jones, 2005, p.3). Czerniewicz defines it as encompassing 'the activities and knowledge domain where education and technology intersect' (2008b, p.171), noting that educational technology is more often referred to as a field than as a discipline. This is based on Ely's description (1999) of a discipline as having consistent beliefs and principles that commit to particular paradigms and approaches, which is not the case in this field (more on this below).
A longstanding debate continues about the limitations of ETR. Some claim it lacks a critical dimension, with insufficient studies using qualitative methods (Selwyn, 1997, p.305). Subsequent papers, by contrast, note a deficit in quantitative studies (Gardener and Galanouli, 2004; Clark, 2013), whilst others find that mixed methods have been most common, with qualitative studies increasing in recent years (Huang et al., 2013). Underwood (2004) clarified that qualitative research has become more predominant (particularly in the UK). The key to sound research is using appropriate methods that correspond with a study's purpose (Gardener and Galanouli, 2004); thus a crucial element in improving ETR is ensuring strong consistency between methods, results and conclusions (Bebell and O'Dwyer, 2010).
Another criticism of ETR is that it lacks theoretical perspective (Underwood, 2004), which potentially separates the researchers (often computer scientists) from the education community, contributing to a gap between research and practice. As a result, schools cannot keep up with the constant advances being made. This may be partly due to the limited input educational practitioners have in research design, which is often undertaken by technology specialists (Huang et al., 2013). This lack of communication between the two fields could be addressed by maintaining a stronger focus on theory, to underpin evaluations and to counterbalance fast-paced changes by bringing a constant aspect to inquiries (Bennett and Oliver, 2011).
A related issue is that the context of policy and practice is also in constant flux (Gardener and Galanouli, 2004, p.153). Selwyn (1997) stated that the research is 'divorced from the wider socio-political and economic context altogether'. Perhaps it is these changing contexts, combined with limited theoretical perspectives, which have resulted in a lack of continuity between projects. A British Education and Technology Research Agency-funded project aimed to address this, compiling a review of past projects to ensure current studies are informed (Rushby and Seabrook, 2008). This lack of historical context in ETR may also result from a shortage of large-scale, or longitudinal, studies (Gardener and Galanouli, 2004; Underwood, 2004).
Other issues are debated in the field, as the learning benefits of access to technology must be evaluated worldwide. Policymakers have often already made investments and thus seek to prove their educational effectiveness (McNabb, Hawkes and Rouk, 1999). Hence ETR can follow policy rather than driving it (Bennett and Oliver, 2011, citing Selwyn, 2007 and Conole et al., 2007), when evidence of technology's impact on learning should instead precede investment. It may be that flawed research is creating a challenging situation for policymakers, who have trouble making good decisions about funding without sound empirical evidence of technology's benefits (Bebell and O'Dwyer, 2010). Moreover, the policymakers and technology providers responsible for funding studies control the research agenda, and their demand for post-hoc evaluations drives researchers to engage in particular types of investigation. Therefore technology has come to drive learning rather than vice versa (Rushby and Seabrook, 2008).
These commissioned evaluations of educational technology are, as noted above, likely to be undertaken by computer scientists. Historically, the field has roots in the design and assessment of instructional technologies, so investigations are likely to follow the objective behaviourist tradition (Bennett and Oliver, 2011). Controversy has thus arisen regarding appropriate methodology, as academics from different backgrounds are involved. The field's increasingly interdisciplinary nature (with contributors from psychology, sociology, education and information technology) means that potentially competing paradigms are supported, depending on a researcher's orientation. Furthermore, whilst some argue for an overarching paradigm, attempting to impose such a measure would be futile, as ETR will inevitably emerge with varying paradigms depending on the underlying motives (Czerniewicz, 2008a). Some suggest that critical or emancipatory approaches should be both encouraged and valued to counter the dominance of studies lacking theory (Bennett and Oliver, 2011, p.187). There is also a need to study what meanings technology is given and how, as well as in what context and by whom, in order to understand its varying uses and potential in different situations and how these can change and develop (Johri, 2011). A further argument concerns the place of critical theory in the field, wherein technology might be critically assessed in relation to how it may maintain hegemonic power relations, or might otherwise combat existing constructions of what education is and how it should be approached (Hall, 2011). This challenges the assertion that technology is value neutral (Jones, 2005). The question of the place of critical theory is situated in a wider discussion about paradigms.
Paradigms, as noted above, will be influenced by the research agenda, which is often set by policymakers seeking evidence that their investments were worthwhile, or by designers aiming to test a product. Such research is likely to subscribe to a positivist paradigm, whereby quantitative data collection aims to provide objective results about an ontologically knowable reality. By contrast, a project subscribing to a post-modern paradigm holds that reality is socially constructed and knowledge epistemologically subjective. Hence traditional experimental methods or quantitative data would not fit, since this worldview denies the validity of objective, measurable outcomes. The purpose of a study therefore delineates the methods used, and these facets of the research are connected by underlying philosophical beliefs about what can be known (epistemology) and about the nature of being, of reality itself (ontology). Therefore, no method is better than another, only more fitting (Borrego, Douglas and Amelink, 2009; Roter and Frankel, 1992). Hence, where and how to conduct research depends on the object and direction of study.
The connection between the purpose of a study and the approach taken may be illustrated using two examples. Razak et al. (2010) set out to compare the benefits of laboratory experiments (in controlled artificial environments) with those of field studies, or field experiments (in a natural setting). Whilst field experiments are more likely to reflect traditional scientific methods, field studies are more likely to use mixed methods or qualitative ethnographic approaches (Jensen and Skov, 2005). In undertaking usability testing of a drawing application for five-year-olds, Razak et al. largely confirmed the known benefits of laboratories (such as controlled conditions, easier data collection, and more accurate and readily available equipment). However, drawbacks such as the number of adults needed to control and entertain the children, making the experiment labour intensive, were overcome in the school setting (already a child-friendly environment staffed by experienced adults). The authors concluded that conducting some research in the wild at an early stage of a project could improve design quality for the subsequent laboratory stage. By contrast, another piece of ETR investigated boys' motivation for learning when using camcorders to produce online videos based on hobbies such as skateboarding (Willett, 2009). Using social learning theories such as communities of practice (how skills and knowledge develop through participation in a community) to gain in-depth understanding based on social constructivist theories, the study demanded long-term commitment (one year) and a natural setting for observation. Social phenomena such as learning contain so many interacting factors that 'traditional experimental designs don't yield effective information' (McNabb et al., 1999, p.3). These contrasting examples highlight how the questions driving research dictate paradigms and approaches. Hence there cannot be one pre-defined set of guidelines for a field motivated by such different interests and addressing such diverse problems.
One extensive project has started to address the need for ETR to develop additional evaluation tools to measure whether students are learning 'the new basics' such as computer literacy, collaborative teamwork skills, and lifelong learning abilities (McNabb et al., 1999, p.10). It also addresses the challenge of delivering effective learning benefits through Information and Communication Technology in developing countries, which has been highlighted as a problem (Khan, 2012). Dr Mitra, Chief Scientist at a Delhi training consultancy, started the Hole in the Wall (HiW) project in 1999, installing a PC in a nearby slum (quite literally in a hole in the wall). Key to the research paradigm was an understanding that the study was 'not a controlled experiment but rather a set of qualitative observations about the changes in a societal group caused by a (controlled) change in the environment' (Mitra and Rana, 2001, p.230). Contrary to the criticisms above, this study was grounded in theory. It aimed to discover inductively if, and how, slum residents might learn independently of any teachers. Vygotsky's and Piaget's social constructivist theories were therefore applicable, wherein learning is an active process with learners experimenting and collaborating to construct their own knowledge. The relevant paradigm was humanist critical theory, which supports qualitative data collection and analysis techniques due to its focus on redressing imbalances in existing power relations: giving voice to the marginalised or underprivileged demands some narrative through which they can present their own subjective reality. Therefore, using a camera to record activity at the PC over a year provided a suitable ethnographic approach that gave depth to understanding the meaning of this tool to its users. Mitra also remotely monitored the PC, logging the activities it was used for, enabling him to understand the steps taken on the path towards computer literacy. This data presented a clear demonstration of learners collaborating, using trial-and-error approaches, to discover how to use the PC and its applications. As well as providing empirical evidence of learning and how it occurs, this engaged with social constructivist theory to illuminate the workings of the process. He concluded that underprivileged children, without any planned instructional intervention, achieved 'a certain level of computer literacy' (Mitra and Rana, 2001, p.230).
Thus, on the basis of this first study alone, Mitra's work overcomes numerous criticisms of ETR. Following the study, he hypothesised that disadvantaged children in remote rural India could, using similar methods, achieve test scores equivalent to those of children at private urban schools. By using the results of the first study to inform the next, the HiW project gained continuity, something allegedly lacking previously in ETR. That HiW studies continue worldwide, more than ten years after the initial experiment, attests to the project's longevity. The study in rural India, undertaken with a psychologist, took a different paradigmatic approach: a more traditional logical positivist design aimed to provide objective evidence, taking comparative sample groups of children and testing them to provide a quantitative performance measurement. Again, the underlying motivation dictated the appropriate methods. This is crucial because it links the project with education, placing it in the context of schools and providing measurable results. Policymakers are keen on such outcomes: with organisations like the National Knowledge Commission set up to increase India's competitive edge in education, this is what they want (Chhapia, 2012).
By undertaking a number of related but separate studies, the HiW project has been able to use different paradigmatic approaches depending on the underlying motive for each particular study. Thus a variety of stakeholders are satisfied and the benefits of varying paradigms reaped. The number of children in poverty with limited access to quality schooling who have benefitted from this project can only be guessed at. The critical theory paradigm that Hall (2011) argues is so important has given these children a voice, with several of the HiW studies using purely qualitative measures to give depth to the lived experiences of those to whom this learning has made such a difference. Yet other studies within the project take a positivist approach in order to communicate sufficient evidence to policymakers and provide empirical reports of quantitative outcomes.
There is potentially a contradiction between HiW's non-invasive learning approach and its standardised tests of pre-defined outcomes (Arora, 2010), yet this can be construed as an attempt to engage with stakeholders on their own terms. Other criticisms of HiW's methodology are limited. The charge that the project obscures larger injustices in the system, approaching technology as value neutral and overlooking its potential clash with the communities in which it is installed (Halves, 2013), may be fair: critical theorists should collaborate with participants rather than imposing upon them. Others suggest the project lacks emphasis on learning content, focusing instead on process (Arora, 2010). Yet only so much can fall within the scope of one project, and broadening the emphasis further might damage the research quality.
Disagreement remains as to what defines educational technology as a discipline, if indeed it is one, and as to what problems it should address, and how. Numerous challenges in the field have been identified, with limited potential solutions. Whilst it has been clarified that an 'in the wild' approach is generally more appropriate for investigating phenomena such as social learning, complementing this with laboratory-based studies (where the focus is technology design) is also appropriate. Addressing the issue of whether to conduct research in the lab or in the wild is therefore just one step toward addressing many challenges. However, with a long-term commitment to an idea, projects can overcome many of the challenges present in such a contested field, not by dedicating a project to one paradigm, but by engaging with whichever is appropriate to answer the questions that arise from a problem, underpinned by theory that is coherently developed throughout. It is possible to incorporate different perspectives and allow paradigmatic approaches to be fluid, with boundaries that shift and blur (Guba and Lincoln, 2005). Given the diverse interdisciplinary influences, the changing socio-political and educational landscape, constantly updating technologies and the complex debates in this relatively young field, it may be not only possible but necessary to be paradigmatically adaptable. A number of suitable approaches can then satisfy stakeholders who demand traditional experimental methods to prove their investments worthwhile, whilst also giving voice to marginalised learners and starting to answer what is, arguably, one of the most important questions of our time: how to use technology as a tool for empowerment.

References
Arora, P., 2010. Hope-in-the-Wall? A digital promise for free learning. British Journal of Educational Technology, 41(5), pp.689-702.
Bebell, D., O'Dwyer, L.M., Russell, M. and Hoffman, T., 2010. Concerns, considerations and new ideas for data collection and research in educational technology studies. Journal of Research on Technology in Education, 43(1), pp.29-52.
Bennett, S. and Oliver, M., 2011. Talking back to theory: the missed opportunities in learning technology research. Research in Learning Technology, 19(3), pp.179-189.
Borrego, M., Douglas, E.P. and Amelink, C.T., 2009. Quantitative, Qualitative, and Mixed Research Methods in Engineering Education. The Research Journal for Engineering Education, 98(1), pp.53-66.
Chhapia, H., 2012. Indian students rank 2nd last in global test. The Times of India [online], 15 Jan.
Clark, L., 2013. Virtual Learning Environments in teacher education: a journal, a journey.
Technology, Pedagogy and Education, 22(1), pp.121-131.
Czerniewicz, L., 2008a. The field of educational technology through a Bernsteinian lens. The
Fifth Basil Bernstein Symposium, 9-12 July 2008, Cardiff University. Available from:
www.caerdydd.ac.uk [Accessed 27 April 2013].
Czerniewicz, L., 2008b. Distinguishing the Field of Educational Technology. Electronic
Journal of Elearning, 6(3), pp.171-178.
Czerniewicz, L., 2011. Theory in learning technology. Research in Learning Technology, 19(3), pp.173-177.
Gardener, J. and Galanouli, D., 2004. Research into information and communications
technology in education: disciplined inquiries for telling stories better. Technology, Pedagogy
and Education, 13(2), pp.147-161.

Guba, E.G. and Lincoln, Y.S., 2005. Paradigmatic Controversies, Contradictions, and Emerging Confluences. In: N.K. Denzin and Y.S. Lincoln, eds. The SAGE Handbook of Qualitative Research. Third Edition. London: SAGE Publications Ltd, pp.191-216.
Hall, R., 2011. Revealing the transformatory moment of learning technology: the place of
critical social theory. Research in Learning Technology, 19(3), pp.273-284.
Halves, T., 2013. Sugata Mitra on edtech and empire. The Digital Counter-Revolution, 8
March. Available from: www.digitalcounterrevolution.co.uk [Accessed 9 May 2013].
Huang, H.-W., Sampson, D. and Chen, N., 2013. Trends in Educational Technology through the Lens of the Highly Cited Articles Published in the Journal of Educational Technology and Society. Journal of Educational Technology and Society, 16(2), pp.3-20.
Jensen, J.J. and Skov, M.B., 2005. A Review of Research Methods in Children's Technology Design. Proceedings of the 4th International Conference for Interaction Design and Children. University of Colorado. Denmark: Research Database of Aalborg University (VBN). Available from: www.vbn-office.aau.dk [Accessed 27 March 2013].
Johri, A., 2011. The socio-materiality of learning practices and implications for the field of
learning technology. Research in Learning Technology, 19(3), pp.207-217.
Jones, M.G., 2005. Defining Educational Technology for Classroom Learning. Winthrop
University. Available from: coe.winthrop.edu [Accessed 15 March 2013].
Khan, S.H., 2012. Barriers to the introduction of ICT into education in developing countries: the example of Bangladesh. International Journal of Instruction, 5(2), pp.61-80.
Markula, P. and Silk, M., 2011. Qualitative Research for Physical Culture. New York: Palgrave Macmillan.
McNabb, M., Hawkes, M. and Rouk, U., 1999. Critical issues in evaluating the effectiveness of technology. Washington DC: US Department of Education. Available from: www.ed.gov/rschstat/eval/tech/techconf99/confsum.pdf [Accessed 23 March 2013].
Mitra, S. and Dangwal, R., 2010. Limits to self-organising systems of learning: the Kalikuppam experiment. British Journal of Educational Technology, 41(5), pp.672-688.
Mitra, S. and Rana, V., 2001. Children and the Internet: experiments with minimally invasive education in India. British Journal of Educational Technology, 32(2), pp.221-232.
Razak, F.H.A., Hafit, H., Sedi, N., Zubaidi, N.A. and Haron, H., 2010. Usability Testing with Children: Laboratory vs Field Studies. Proceedings of the International Conference on User Science and Engineering, 13 December 2010, Selangor, Malaysia. IEEE Xplore. Available from: ieeeexplore.ieee.org [Accessed 24 March 2013].
Roter, D. and Frankel, R.M., 1992. Quantitative and qualitative approaches to the evaluation of medical dialogue. Social Science and Medicine, 34(10), pp.1097-1103.
Rushby, N. and Seabrook, J., 2008. Understanding the past, illuminating the future. British Journal of Educational Technology, 39(2), pp.198-233.
Selwyn, N., 1997. The continuing weaknesses of educational computing research. British Journal of Educational Technology, 28(4), pp.305-307.
Underwood, J., 2004. Research into information and communications technologies: where
now? Technology, Pedagogy and Education, 13(2), pp.135-145.
Willett, R., 2009. Young People's Video Productions as New Sites of Learning. In: V. Carrington and M. Robinson, eds. Digital Literacies, Social Learning and Classroom Practices. London: SAGE Publications Ltd, pp.13-26.
