

On quality in education
Geoffrey D. Doherty
University of Wolverhampton, Wolverhampton, UK

Abstract
Purpose – The purpose of this paper is to discuss some key aspects of quality in education in the
light of over 30 years' practical experience of doing quality assurance (QA).
Design/methodology/approach – Reflection on three concepts, which are still the subject of
debate, namely: “quality”; “total quality management (TQM)”; and “autonomy”.
Findings – As this is not a research paper, it presents no findings. There are some research
implications, if only to deter researchers from digging up old ground. More research into the diversity
of and interactions between cultures in academia might prove useful.
Practical implications – There are lessons to be learnt from the past. Doing quality improves
quality. Talking about it or trying to impose it does not. Managers and leaders need to reflect more
carefully than is their wont on the purposes and procedures of QA in education.
Originality/value – This paper makes a contribution to the debate about quality in education in
universities and schools and suggests that a clearer understanding across the education system of the
scope and purpose of QA, the nature of TQM and the limitations of autonomy might lead to better
embedded and more effective continuous improvement.
Keywords Education, Quality assurance, Total quality management, Quality indicators
Paper type Viewpoint

Introduction
The quality debate still rages on in academia. What a good thing this is for the quality
business in general, which nowadays provides quite a decent living for a not
inconsiderable cadre of administrators, academics and experts, and in particular for
journals like this one, which has been kept in business for over 15 years. I use the language of
the market place deliberately because this highlights one of the main bones of contention.
The quality assurance (QA) methods currently used in education demonstrably derive
from industrial applications. To many academics, this is anathema.
I would like to offer a few reflections on this debate, in which I first became
consciously involved about 30 years ago. Before I retired, I made some contributions to
this journal – for instance, as its first guest editor, for a volume focussed on
assessment, quality and continuous improvement in higher education (HE) (QAE,
1997) – and elsewhere. This is almost certainly the last paper I shall ever write and I want
to make it clear that it is not research based. I am primarily concerned to attempt
some clarifications of contested issues and to make some suggestions as to why so
many academics are pathologically averse to what, in my opinion, they incorrectly
perceive QA to be.
What are my credentials for this exercise? I first became actively involved in quality
matters, on the one hand, as a member of several Council for National Academic
Awards (CNAA) committees and working parties – the education committee; the
committee on institutions; and a little known but fascinating think-tank convened by
Edwin Kerr (the Chief Officer) himself, focussed on further developments in HE. On the
other, I was a curriculum developer (Assistant Director for Academic Affairs,
Crewe and Alsager College of Higher Education) and Chair of numerous CNAA
validating panels for education and the humanities. Subsequently, I led the University
of Wolverhampton’s successful bid for British Standards (BS) 9001 registration and
later became a subject review leader and reports editor for the Quality Assurance
Agency (QAA). I was also a member of the British Institute of Quality Assurance’s
working party producing the UK’s contribution to the development of International
Standards Organisation (ISO) 9000. I did not become involved with QA systems purely
for the money but because, naively perhaps, I thought firstly that quality systems in
general and secondly the QAA in particular, had something positive to offer to
academia and to the educational experiences of students.
As already stated, this is not a research paper: I am neither reporting on research
nor trying to prove anything. I shall break a few “rules” – for instance, I shall not
weigh it down with references, so there may be a few unevidenced assertions and the
occasional anecdote. There will be no statistics, though I may comment on the validity
of some statistical evidence and I shall lay few offerings on the shrine of quality theory.
However, I shall structure the paper round three frequently asked, non-research
questions (NRQ), namely:
NRQ1. What do you mean by quality?
NRQ2. What do you mean by total quality management (TQM)?
NRQ3. What do you mean by autonomy?

What do you mean by quality?


There is no simple answer to this question, since “quality”, like “beauty” is subjective
– a matter of personal judgement. Some years ago I wrote a paper, rather facetiously
entitled “Can we have a unified theory of quality?” (Doherty, 1994). This explored in
some depth the difference between quality defined as “fitness to/for purpose” or the
slightly more nuanced “fitness of purpose” and quality defined as “excellence”, which
was raised, yet again, in this journal by Cartwright (2007).
The problem with “excellence” is that the concept is just as subjective as “quality”,
which is tantamount to defining quality as quality and means nothing more profound
than, “excellence is what I and like-minded others say it is”. In other words, excellence
means compliance with our or my norms:
First come I, my name is Jowett,
There’s no knowledge but I know it.
I am the Master of this College
What I don’t know isn’t knowledge. (Balliol Rhymes: anon.)
Fitness for purpose, however, requires defining the purpose and setting criteria by
which a judgement can be made. It is, of course, much easier to devise criteria for
manufacturing than education or other service industries. People are not widgets and
inappropriate, unassimilated, unimaginative attempts (of which there have been many)
to apply manufacturing methodologies to universities, colleges and schools quite
rightly raise the ire of teachers, lecturers and researchers.
Educational philosophers, psychologists and sociologists have argued the pros and
cons of normative versus criterion-referenced judgements for several generations.
The bone of hegemony in respect of knowledge – the exercise of power over what
counts as knowledge – is very well gnawed. The same is true of quality. Such status
games are meat and drink to academics. Most players will adopt the stance that the
superiority of academic values over market values is a given, so that the application of
a market-derived methodology to academia will have negative effects – more or less
by definition. Cartwright (2007, p. 290) claims that, because of the QA agenda “sickness
or pathology” has “befallen” academia. Staying with this confusion of methodology
with values, one could equally argue that sociologists, Marxist literary critics et al.
have enjoyed excellent profits from the “theory” business for a couple of generations
and, as a result, “sickness or pathology” has befallen the discipline of, say, English
Literature. This is all good knock-about fun. Sadly, however, the “quality issue” is
more than an academic argument about definitions of meaning.
There is the question of who gets what from the paymaster’s limited pot and why.
Paymasters generally expect to gain satisfaction from what they are paying for.
In principle, it matters little whether the paymaster is the parent, the employer, the
student or the government. I well remember, back in the 1960s, the first group of
American students to attend a six-week summer school at Alsager College of
Education. They were paying several thousand dollars for the privilege of an
introduction to Primary Education in the UK. At the end of the first week, they
presented an ultimatum to the Head of Education to the effect that, if he did not
immediately change one of their lecturers, whom they considered to be incompetent,
they would walk out and ask for their money back. Shock horror was the reaction to
such effrontery, though the replacement was rapidly forthcoming. The government is
investing not a few thousand dollars but several billion pounds sterling per annum in HE, and
mind-bogglingly more in state education in general. I have no desire to revisit the dreary
and fruitless argument about who is the customer in educational transactions. Suffice it
to say that one of the Oxford English Dictionary’s definitions of customer is: (someone)
who gives business habitually to any seller or establishment. I fail to see how that
definition does not apply to both students and government. The weasel word
“stakeholder” is generally preferred these days as it raises fewer hackles. However,
stakeholders and customers alike quite reasonably expect some means of ensuring the
value of what they are paying for – QA. Educational organisations have a diverse
range of customers (or stakeholders, clients, consumers, investors – a rose by any other
name . . .) with diverse and sometimes conflicting expectations. This does not mean that
it is impossible to implement appropriate QA methods in educational organisations.
QA at this level needs a more precise definition of quality than excellence, so we are
back to fitness – either or both “fitness to/for or of” purpose.
A word about the difference between “fitness to/for” and the “fitness of” definitions:
let us take, for instance, a first degree in subject X. In the “fitness to/for” model, the
provider defines the purpose of the degree, devises a curriculum with
objectives/learning outcomes appropriate to the attainment of the purpose and
constructs a set of assessment criteria to ensure that the student has achieved at least a
sufficient minimum of those objectives or outcomes to be awarded the degree. Internal
QA systems evaluate the success of the degree programme in attaining its objectives,
devise performance indicators and make comparisons of performance over time and
across the organisation.
That is an oversimplification. There can be many different “purposes”. There is
plenty of room for debate over definitions of purpose for universities, colleges, HE,
schools and education in general. However, for any QA system to work, there has to be
a purpose. The purpose might be limited to material outcomes – as in the production of
widgets, or it might be wide and involve qualitative as well as quantitative outcomes –
as in the provision of learning experiences.
In the “fitness of” model, the process is the same, save that the curriculum content,
outcomes, etc. have been benchmarked against some generally agreed external criteria –
standards. Again, this is easy for widgets but considerably less easy for learning
experiences. Standards are likely to be verified by agencies external to the provider, such
as the QAA, professional bodies, or Office for Standards in Education (Ofsted).
To achieve a standard, there must be a viable specification with a set of criteria that must
be met. The QAA’s (2001) frameworks, codes of practice and benchmarks (for HE
qualifications) are examples. Once this has been done, theoretically it becomes possible
to evaluate performance across both an organisation and other organisations offering
the same qualifications. I say theoretically, because of the cherished diversity that exists
in HE (or in schools, for that matter). Nowadays, a Bachelor of Arts (BA) in institutions
X and Y must both be benchmarked and fulfil the requirements of the framework and the
code. They may even have virtually the same curricula, but they may not, says the QAA,
provide quite the same learning experiences.
One of the problems with performance indicators, like most evaluation data, is that
they are always out of date – they refer to what has been done, not what may be going
on now, or may happen in the future. There is a vast range of performance indicators
for HE, with which most readers of this journal will already be conversant. The easiest
to measure and use sensibly to compare the performance of different institutions relate
to efficiency, effectiveness and economy – e.g. staff student ratios, cost per student or
retention rates. Others, more concerned with qualitative data – e.g. student satisfaction
or degree classifications – are much more difficult to use as comparative data. The
Holy Grail of performance indicators is comparative value-added (CVA), which has
been on the agenda since the 1980s when the government and its various funding
agencies began to become seriously concerned about value for money, comparative
performance and accountability.
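To make the arithmetic concrete: the efficiency-type indicators mentioned above are no more than ratios of published counts, and the real difficulty lies in the comparability of the underlying data rather than in the calculation. A minimal sketch, in Python, using invented figures rather than real HESA returns:

```python
# Illustrative only: the three "easy" efficiency indicators are simple ratios.
# The figures below are invented for the sketch, not real institutional data.
from dataclasses import dataclass


@dataclass
class InstitutionYear:
    fte_students: float        # full-time-equivalent students
    fte_academic_staff: float  # full-time-equivalent academic staff
    total_spend: float         # annual expenditure in pounds
    entrants: int              # students who started the year
    completers: int            # students still enrolled (or qualified) at year end


def efficiency_indicators(d: InstitutionYear) -> dict:
    """Staff-student ratio, cost per student and retention rate."""
    return {
        "staff_student_ratio": d.fte_students / d.fte_academic_staff,
        "cost_per_student": d.total_spend / d.fte_students,
        "retention_rate": d.completers / d.entrants,
    }


print(efficiency_indicators(InstitutionYear(
    fte_students=12_000, fte_academic_staff=700,
    total_spend=95_000_000, entrants=3_400, completers=3_100)))
```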
The Jarratt (1985) report, which strongly recommended that HE institutions should
have clear objectives and demonstrate the achievement of value for money, heralded the
application of strategic management principles to universities and colleges. Academics
have moaned about “managerialism” for the ensuing 20 years. Industrial, commercial
and service enterprises generally recognise QA as an essential management tool. Most
universities and colleges and, indeed, some schools have annual budgets of millions of
pounds and certainly qualify as large businesses (over 500 employees): they have to be
“managed”. This does not mean that a university or a school or a mass education system,
for that matter, can be managed like a toothpaste factory. There is plenty to be said
about the negative effects of crass misapplication of frequently out-of-date
business management methods across the whole education system from government
(especially government) downward, but not in this paper. Suffice it to say that, in my
view, academics could more effectively aim their slings and arrows at the effectiveness
of an institution’s managers or government policies than at vague concepts like
“managerialism”. I well remember being regularly warned, as a QAA Review Team
chair, that: “This University (College, Faculty . . . whatever) is collegial not managerial!”
However, they were all “managed”, not always very effectively.
Returning to performance indicators, value-added is a can of worms. How do you
measure value-added where people are concerned? Or in a system where one of its most
cherished characteristics is diversity? There are just too many contextual variables,
some of them immeasurable in numerical terms, for even the most sophisticated
statistical methods to cope with. During the late 1980s and early 1990s (and, indeed,
ever since), government and its various agencies have been attracted to the idea of using
performance data to influence funding. Some individual institutions toyed with the idea.
During the late 1980s and 1990s, the guru of performance indicators was Pettifor (1990),
the Director of The Performance Indicator Project, based in Nottingham University. His
unit (which later became an independent enterprise) published a quarterly report using
data freely available in the public sector. It used highly sophisticated statistical analysis,
but even he gave up the attempt to produce CVA between institutions. There were too
many variables to produce reliable results. They were useful, however, in comparing
value-added within a particular institution. We used them for a time at the University of
Wolverhampton, but found it an expensive way of comparing departmental value
added. Merely eye-balling the data revealed that, for instance, although the academic
attainment of law students (measured by degree classifications) was usually excellent,
the value-added was less than that of, say, engineering, because the A level scores of law
students were considerably higher than those of the engineers. This did not induce
students to transfer from law to engineering.
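The law-versus-engineering point can be made concrete with a toy calculation: take value-added as the gap between a department's average degree outcome and the outcome its students' entry qualifications would predict. The sketch below is only an illustration of that "eye-balling" – the departments, scores and the linear rule for expected outcomes are invented, and the Nottingham unit's actual statistical methods were far more sophisticated.

```python
# Value-added as (actual mean degree outcome) minus (outcome expected from
# A-level entry points). All data and the expected-outcome rule are invented.
from statistics import mean

# Entry qualifications (A-level points) and degree outcomes (1st = 4 ... pass = 1)
departments = {
    "law":         {"entry": [28, 30, 26, 29], "outcome": [4, 3, 4, 3]},
    "engineering": {"entry": [18, 20, 16, 19], "outcome": [3, 3, 2, 3]},
}


def expected_outcome(entry_points: float) -> float:
    # Hypothetical institution-wide rule fitted to past cohorts:
    # higher entry points lead to a higher expected classification.
    return 1.0 + 0.1 * entry_points


for name, data in departments.items():
    expected = mean(expected_outcome(e) for e in data["entry"])
    actual = mean(data["outcome"])
    print(f"{name:12s} expected {expected:.2f}  actual {actual:.2f}  "
          f"value-added {actual - expected:+.2f}")
```

On these invented numbers the law department shows the better classifications but the lower value-added, which is exactly the pattern described above.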
Notwithstanding these intractable problems, the Department for Children Schools
and Families in the UK is now happy to accept contextual value added (CVA) data for
schools provided by a private, charitable agency, the Fischer Foundation Trust. Their
methodology uses published progression data – standard attainment tests carried out
in schools at Levels 1-5: Level 4 uses General Certificate of Secondary Education and
Level 5 A level results. These are then “contextualised” by factoring in other, social
variables. Using regression analysis, “expected performance” at Levels 3-5 is then
predicted for every child. On the basis of the information given in their Technical Paper
1 (Fischer Foundation Trust, 2007), it is not easy to judge the reliability of the
outcomes. The sample sizes are enormous: over one-and-a-half million, but they appear
to be using some moderate, or worse, correlations. Moreover, the methodology assumes
that qualitative (social) variables can be accurately represented by numerically measurable
data – e.g. post codes or numbers of children having free school dinners – and that the
“expected” progression of an individual child from one level to another depends on
factors that can be controlled by the school: obviously a false assumption, so the whole
process is not exact. Added to this, in any given school unpredictable poor performance
by a very small number of pupils will significantly lower the CVA. This is not to say
that such data are without any meaning at all, but that from the point of view of
inferential statistics their validity and reliability are quite dubious.
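The regression step can be shown schematically. The sketch below is emphatically not the Fischer Trust's published model – the variables, coefficients and data are synthetic – but it shows how an "expected" score per pupil, a school-level CVA as the mean residual, and the sensitivity of that mean to a handful of unexpectedly poor results all fall out of an ordinary least-squares fit.

```python
# Schematic contextual value-added: predict each pupil's score from prior
# attainment plus crude numerical proxies for social context, then average the
# residuals by school. All data are synthetic; not the Fischer Trust's model.
import numpy as np

rng = np.random.default_rng(0)
n = 200                                    # pupils in one cohort

prior = rng.normal(100, 15, n)             # prior attainment score
free_meals = rng.integers(0, 2, n)         # 1 = eligible for free school meals
deprivation = rng.normal(0, 1, n)          # postcode-derived deprivation index

# Synthetic "true" outcome, used only to generate example data.
outcome = 20 + 0.8 * prior - 4 * free_meals - 2 * deprivation + rng.normal(0, 8, n)

# Fit "expected performance" from prior attainment and the contextual proxies.
X = np.column_stack([np.ones(n), prior, free_meals, deprivation])
coef, *_ = np.linalg.lstsq(X, outcome, rcond=None)
residual = outcome - X @ coef              # pupil-level value added

school = rng.integers(0, 4, n)             # assign pupils to four schools
for s in range(4):
    print(f"school {s}: CVA = {residual[school == s].mean():+.2f}")

# The point made in the text: a few very poor results in one school are enough
# to drag its CVA down noticeably.
shocked = residual.copy()
shocked[np.flatnonzero(school == 0)[:3]] -= 30
print(f"school 0 after three very poor results: {shocked[school == 0].mean():+.2f}")
```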
Performance indicators and value added are the basis of league tables – another
anathema to academics and schoolteachers. Even the Higher Education Statistics
Agency – HESA (2008) rejects league tables:
No meaningful league tables could fairly demonstrate the performance of all higher education
institutions relative to each other. The HE sector is very diverse. Each institution has its own
distinct vision and each emphasises different aspects of higher education.

Nevertheless, they have been accepted by-and-large with delight by the national
press, such as The Times Higher Education for universities and colleges and The
Daily and Sunday Times and The Daily and Sunday Telegraph for schools. This is
presumably because they are on the one hand an
easy means of giving academics and teachers a bit of stick and on the other an
equally easy means of scoring anti-government points when institutions are failing
to deliver government targets. Governments, on the other hand, like them because
they are a means of exerting pressure on the system to deliver educational (or other,
e.g. National Health Service) policy decisions: they are a means of coercion. A
change of government would not lead to the abandonment of data-based league
tables: merely a change of emphasis. A Tory Government, for instance, might well
use social mobility data to support a change of policy to re-introduce selective
education. However, this begs the question of whether league tables are either a
satisfactory or valid measure of what students and parents perceive to be quality.
For instance, as far as schools are concerned they are palpably not so, for many
schools nowhere near the top of the league tables, particularly the CVA tables,
remain oversubscribed and I have yet to see any evidence that the only reason why
a potential student applies for a university is its league-table position. Although I
have no valid evidence, I suspect that the continued maintenance of what is
perceived as quality education in schools and universities, by students and parents
at least, depends far more on the efforts of some enlightened leaders and
practitioners than on coercion from external agencies, which generally leads to
superficial and unwilling compliance or even worse, teaching to targets.
In fairness, that quality is more than numerically-calculated outcomes was at least
recognised by the QAA in its subject review methodology in that those elements
of review dealing with curriculum content, student numbers and attainment were
regarded as related to standards, while those related to resources, student progression
and student support were regarded as quality of student experience. Even Ofsted uses a
wide range of evidence other than numerical performance indicators in arriving at its
conclusions about a school, though there is plenty of evidence that they give such data
undue weighting.
Quality, then, remains elusive, since what any individual regards as quality will
always be a subjective judgement. QA, however, is something organisations do:
a methodology for judging the degree to which macro and micro organisational aims,
objectives and outcomes have been achieved. Quite frankly, it is a management tool,
which can make an effective contribution to improving performance at the institutional
level or at a subject or departmental level within an institution. In itself, it will not
make management better or worse. The methodology may be used for negative or
positive purposes. Even unreliable instruments like “Comparative” (or “Contextual”)
value-added can be an effective tool within an institution, but attempts to make
comparisons between institutions for the purposes of coercion, league tables or funding,
need to be strongly resisted.
What do you mean by total quality management?
Despite the fact that there is an enormous volume of published books and journal
articles on this subject, ranging from the gurus of the early 1980s (Crosby, Deming,
Juran, Ishikawa, Taguchi, Peters and Waterman et al.) to contemporary six sigma
methodology, it is still frequently misrepresented, misunderstood, or both, by many
academics. I think it was W. Edwards Deming who commented that he preferred the idea of
continuous improvement to TQM, because everyone understands what that means.
This may be so, but as a quality system, TQM implies something much more
fundamental than continuous improvement. It is a holistic management system
requiring the development of a system-wide culture. In a TQM culture, everyone,
whatever his/her role, task or position in the organisational hierarchy (and there is a
hierarchy in collegial organisations) is responsible for the management of his/her
contribution to the whole (hence “total”). Tribus (1994) wrote one of the best
introductions I know to TQM in education in the early 1990s. In it, he makes clear that
philosophy and vision are as important as skills and resources and that managing a
school, college or university in which the product is learning, is not the same as
managing a factory. Developing TQM in education requires some intellectual effort and
lateral thinking, and not facile misapplication of business vocabulary and techniques.
Being person focussed, TQM aims are easily misrepresented (sentimental approach
to “empowerment of the people”) or degraded (political use as a “management tool for
exploiting the workers”). TQM principles (one of which is that quality cannot be
inspected in) underpin most contemporary quality systems: e.g. ISO 9000 series
(developed from the old British Standards Institution BS 5750 series); the requirements of
the Baldrige Award; the British Quality Foundation Model (BQFM); and The
European Foundation for Quality Management Model (EFQM) for excellence. Of these,
only ISO 9000 series is, as was BS 5750, a documented quality system for which an
institution requires registration to achieve the coveted “kite” mark – the logo, in the
shape of a kite, of the British Standards Institution. To put this in perspective, a business
friend of mine, when the University of Wolverhampton had achieved the kite mark,
said to me, “Well, that’s it then, you no longer have to waste time on government
inspections.” In order to retain the kite mark, annual audit by approved, professional
auditors is required. The others offer annual awards in various categories, of which
education is one. There are regular school winners of BQFM and EFQM awards, which
are based on self-evaluation and peer-group assessment. Nowadays, the EFQM also
offers registration to the model at different levels of “readiness”, in order to encourage
organisations to commit to continuous improvement. In the world of business,
commerce and service industries outside education, these are immensely prestigious
awards. A list of winners is a roll call of internationally famous companies.
A Baldrige, BQFM or EFQM winner is a “world class” organisation. Pretty well
everyone outside education in the UK, or anywhere else, knows this means that not
only has quality been rigorously assured but also that it is outstanding among its
peers. This is what my friend’s above remark implied.
In the UK, of course, education is subject to review or inspection by either the QAA, for
HE or Ofsted for schools. Both have been influenced by TQM. For instance, the QAA in its
current Institutional Audit looks across the institution at such elements as student
experience, progression and support, support staff, financial management, resources,
“ethos” and systems for the QA of both the institution and its collaborators, as well as
academic achievement. Some years ago Lloyds TSB (2003, foreword) sponsored the
application of the EFQM to schools. They produced an excellent guide to self-assessment,
in the foreword to which Charles Clarke, the then Minister of State for Education, wrote:
Rigorous self-assessment lies at the heart of well-managed and successful organisations. The
EFQM Excellence Model, on which the Quality in education tool is based, is widely used
throughout both public and private sectors and has a proven track record supporting
continuous improvement.
He later comments that it is used within the department and its influence can certainly be
seen in the latest Ofsted methodology. However, the crucial difference between EFQM
and either QAA or Ofsted, is that there are no “inspectors”, only external “peer” review
and that only if an institution is putting itself up for an award or registration. The
“excellence model” cannot be effectively imposed from the top-down: it has to be
achieved and it can only be achieved through the commitment of the whole organisation.
The EFQM emphasises leadership (rather than management), people, processes,
results and the importance of innovation and learning – all key quality characteristics
in educational organisations. A basic tool of TQM systems in general, and of the EFQM
in particular, is Deming's continuous improvement cycle. This is modified in
BQFM/EFQM methodology to the RADAR cycle. Both are versions of what
every teacher is introduced to as lesson evaluation (see Figures 1-3).

Figure 1. The Deming cycle
Figure 2. The EFQM RADAR cycle
Figure 3. The teacher education cycle

Since the underpinning intentions of all three approaches to continuous
improvement are shared by educators, one might think that academics and
school-teachers would be reasonably well disposed to adopting the approach: they
should already be using it as part of their everyday teaching (and research). I have
met very few colleagues who reject the concept of evaluation as generally a good thing,
but there seems to be a barrier to extending that perception to their performance within
an institution, or between institutions for that matter. In this respect, things do not
seem to have changed very much since 1993 (Matthews, 1993):

The concepts of quality and excellence are viewed as highly laudable provided they are
applied to others (a variation of the “not in my backyard” philosophy).
A generation on, academics are still obsessed by autonomy.

What do you mean by autonomy?


Autonomy is another favourite buzzword in academia. Like diversity it is a cherished
characteristic of the so-called academic culture. Davies et al. (2007) usefully discuss this
in their examination of its effect on the implementation of the EFQM in UK
universities. Notoriously, the “secret-garden-of-my-classroom” syndrome has afflicted
teachers at all levels for generations. I recommend a visit to Alan Bennett’s The History
Boys – either the film or the play – for a wonderfully comic exposition of its strengths
and weaknesses. At its best it is deeply concerned with “the conservation of a realm of
special knowledge and practice” (Davies et al., 2007, p. 384). At its worst, we are back in
Jowett’s world (as quoted earlier). Until relatively recently in the UK, teachers in
schools have guarded the privacy of their classroom practices – even the sticking of
“displays of work” on the windows. In universities and colleges, there have been many
idiosyncratic approaches to subject teaching. It has taken generations to open
classroom doors to peer-group observation and, generally, to sharing best practice.
Academic culture is a carpetbag term: an oversimplification. Academics tend to be
“groupy” and egalitarian. They do not like control or rules, which suggest the hated
concept of “compliance”, which is regarded as incompatible with academic freedom
and innovation (autonomy). Thus, as Davies et al. (2007, p. 386) note:
The notion of academic freedom is a potential barrier to implementation (of EFQM).
Also, there are cultures within “the” culture. Disciplines, departments and institutions
vary. Any ex-subject reviewer will testify to the differences in ethos or culture between
old, redbrick, the 1960s generation and “new” universities and ex-colleges of education.
To over-simplify again, the “new” ex-polytechnic universities were already to some
extent subject to the managerial approach, the redbricks were pragmatic, and in
different ways both the 1960s institutions and the ex-colleges of education regarded
themselves as collegiate in the style of the old universities. The same is true of
disciplines, some of which tend to be more authoritarian than others – medicine, for
instance. There are plenty of individualists – prima donnas – in academe who
certainly expect compliance with their norms: compliance is fine, so long as it means
compliance with, not by, me. Davies, Douglas and Douglas air these issues.
Unfortunately, the results of their research project tell us absolutely nothing that we
did not already know in 1990. Culture is a very popular concept in organisational
management theory. There is, for instance, any amount of imprecise exhortation for
organisations to develop a “quality culture”. There are well-tried quality
implementation tools that produce results. It is refreshing to work with people in
business and commerce who use them effectively and with ease, but in academia the
barriers still remain up and we do not have enough concrete evidence of why.
When we started working for BS 9001 registration for Wolverhampton University
during the early 1990s, one of the reasons (not by any means the only reason) had to do
with autonomy. The QAA was not yet a twinkle in anyone’s eye. The Higher
Education Funding Council’s Quality Division had not been set up, but it was obvious
to some of us that “quality” was on the government’s agenda. BS 5750, as it then was,
offered an organisation a method first of defining the quality of its own product and
second of demonstrating it could effectively manage its sustained delivery. The whole
process at Wolverhampton was very well documented (Doherty, 1993) and I do not
propose to go over the ground again, merely to note firstly that it produced a fully
autonomous system and secondly that the mere act of reviewing processes and
procedures improved them out of all measure. This is also to some extent true of
writing a rigorous self-assessment. Unfortunately, a documented and audited system
like the BS 9000 series is unsustainable in the context of subject review and institutional
audit: statutory requirements which in themselves already make exhausting demands on
human and physical resources. What we were looking for was a TQM “win win”
situation, which would retain our autonomous control of a quality system – our own
QA system externally verified – that would also meet the inevitable requirements for
accountability in mass, state-funded education. In effect, once the initial effort required
to achieve registration is successful, this is a “light touch” method, because regular
audit requires no preparation. The auditors give the institution very short notice of
their visits and no idea what they will audit. On arrival, they tell the institution what
they wish to see. There are no special papers to prepare. EFQM offers a similar kind of
autonomy. It is considerably more holistic than what is now institutional review and
has much to offer in respect of institutional self-knowledge and the development of a
learning organisation. It is also internationally recognised, which institutional review
is not. However, resource issues and the cultural barrier to change still remain.
Interestingly, with increasing numbers of overseas students (customers . . .
stakeholders . . .) looking for “quality” HE in the UK, the international aspects of QA
are beginning to impact on the QAA itself. “Who reviews the reviewers?” Williams
(2007, p. 2) asks in the current edition of Higher Quality – a question that many of us have
asked in the past. The QAA (2007) is seeking to demonstrate that it complies with
the requirements of the European Standards and Guidelines for QA in HE. Ironically, if the
QAA had been a registered EFQM company, I suspect this would have been a mere
formality.
I remember asking Tribus, who was presenting a paper at a British Deming
Association Conference in 1992, how he would describe himself. His answer was: “As a
recovering academic . . . ”: recovering from the “academic culture”. I fear that little has
changed in the ensuing years. Education managers and academics still do not
understand that QA is something you do, not wrangle about, that to have a positive
effect it needs to be motivated by the desire of the whole organisation including
support staff and students to create the best student learning experiences, the best
research within the available resources. In The Art of War (Mair, 2007, p. 77),
written sometime between the fourth and second centuries BCE, Sun Zi (Master Sun)
advises aspirant leaders that the first factor in ensuring success is to understand
“The Way”, something that neither the generals nor the troops in education have fully
grasped . . . I leave the last word with Williams (2008, p. 19):
[. . .] the biggest challenge of all remains: how to win the hearts and minds of the ordinary
academic, how to shift the perception of quality assurance from one of external policing or
central control to one of internalised, individual, professional academic responsibility,
bringing with it the wish, intention and means to do even better by one’s students. Will this
take another ten years? – at least. Holy grails do sometimes take a while to find.
References
Cartwright, M.J. (2007), “The rhetoric and reality of ‘quality’ in higher education: an investigation
of staff perceptions of quality in post-1992 universities”, Quality Assurance in Education,
Vol. 15 No. 3, pp. 287-301.
Davies, J., Douglas, A. and Douglas, J. (2007), “The effect of academic culture on the
implementation of the EFQM excellence model in UK universities”, Quality Assurance in
Education, Vol. 15 No. 4, pp. 382-401.
Doherty, G.D. (1993), “Towards total quality management in higher education: a case study of
the University of Wolverhampton”, Higher Education, Vol. 25 No. 3, pp. 321-39.
Doherty, G.D. (1994), “Can we have a unified theory of quality?”, Higher Education Quarterly,
Vol. 48 No. 4, pp. 240-55.
Fischer Foundation Trust (2007), Technical Paper 1: Estimates for “Making Good Progress”,
pp. 1-16, available at: www.fisher-foundation.com/index.html (accessed 20 March 2008).
HESA (2008), Performance Indicators 2005/6: Guide to Performance Indicators – Why not
League Tables?, Higher Education Statistics Agency, Cheltenham, available at: www.hesa.ac.uk (accessed 20 March 2008).
Jarratt, A. (1985), Report of the Steering Committee for Efficiency Studies in Universities, CVCP,
London.
Lloyds TSB (2003), Quality in Education: School Self-assessment using the EFQM Excellence
Model and Improvement Techniques, Lloyds TSB Group plc., London, Quality
Management Team.
Mair, V.H. (2007), Sun Zi: The Art of War: Military Methods, Columbia University Press,
New York, NY.
Matthews, W.E. (1993), “The missing element in higher education”, Quality and Participation,
January/February, pp. 102-8.
Pettifor, J. (1990), Print Out, Internal Publication within Nottingham University, Nottingham.
QAA (2001), The Framework for Higher Education Qualifications in England, Scotland, Wales
and Northern Ireland, Quality Assurance Agency for Higher Education, Gloucester.
QAA (2007), Higher Quality, Vol. 25, Quality Assurance Agency for Higher Education,
Gloucester.
QAE (1997), special issue on “Assessment, Quality and Continuous Improvement in Higher
Education”, Guest editor: Doherty, G.D., Quality Assurance in Education, Vol. 5 No. 6.
Tribus, M. (1994), “Total quality management in education: the theory and how to put it to work”,
in Doherty, G.D. (Ed.), Developing Quality Systems in Education, Routledge, London.
Williams, P. (2007), Higher Quality, Quality Assurance Agency for Higher Education, Gloucester.
Williams, P. (2008), The Times Higher Education, TSL, London, January 4.

Corresponding author
Geoffrey D. Doherty can be contacted at: geoffrey.doherty@btinternet.com
