To cite this article: Thomas M. Smith & David P. Baker (2001) Worldwide Growth and
Institutionalization of Statistical Indicators for Education Policy-Making, Peabody Journal of
Education, 76:3-4, 141-152, DOI: 10.1080/0161956X.2001.9681995
6950_Erlbaum_Peabody_Art09 9/16/02 5:43 PM Page 141
Thomas M. Smith
Peabody College
Vanderbilt University
David P. Baker
Pennsylvania State University
An earlier version of this article appeared under the title "International Education Statistics: The Use of Indicators to Evaluate the Condition of Education Systems." In J. Guthrie (Ed.), Encyclopedia of education (2nd ed.). New York: Macmillan.
Requests for reprints should be sent to Thomas M. Smith, Peabody College, Box 514, Vanderbilt University, Nashville, TN 37203.
Ministries or departments frequently plan press releases to put their own spin on indicators when statistical indicator reports such as the Organisation for Economic Co-operation and Development's (OECD) annual Education at a Glance (EAG) are published. The use of indicators for comparisons and strategic mobilization is now a regular part of educational politics. For example, after the 1996 EAG indicators showed their salaries comparing unfavorably with those of their Belgian and German neighbors, Dutch teachers used these indicators to lobby for salary increases. Similarly, in the United States, cross-national comparisons of a statistical indicator of dropouts were used to highlight comparatively low high school completion rates in 2000. In an extreme, but illustrative, case, one nation's incumbent political party requested that the publication of EAG be delayed until after parliamentary elections because of the potentially damaging news about how its education system compared to other OECD nations.
These examples of the widespread impact of international education statistical indicators are all the more interesting considering that an earlier attempt to set up a system of international statistical indicators of education almost 30 years ago failed. Early attempts by the OECD to develop statistical indicators on education systems, known as the Social Indicators Movement, fell apart as idealism about the utility of a technical-functionalist approach to education planning receded in the 1970s (Bottani & Tuijnman, 1994). This article reviews the development of education statistical indicators, particularly as applied to national and cross-national issues of education, and examines how an internationally cooperative effort reversed the earlier pattern of failure.
data and information for national purposes, a large human and financial investment is made to ensure that the data meet standards of comparability across countries. Country-level experts, whether from statistical or policy branches of governments, frequently convene to discuss the kinds of data that should be collected, to reach consensus on the most methodologically sound means of collecting the data, and to construct indicators.
IEA
OECD
The IEA has been the key innovator in the area of education assessment, whereas the OECD has led the development of a cross-nationally comparable system of education indicators. After a failed attempt to initiate an ambitious system of data collection and reporting in the early 1970s, the OECD, with strong support from the United States, undertook the development of a new system of cross-nationally comparable statistical indicators in the late 1980s (Bottani & Tuijnman, 1994). The Ministers of Education of the OECD countries agreed at a meeting in Paris in November 1990 that "information and data are preconditions for sound decision making, prerequisite of accountability and informed policy debate" (Bottani & Tuijnman, 1994, p. 25). Ministers agreed that the data currently available lacked comparability and relevance to education policy.
Although led by the OECD, the core of the Indicators of Education Systems (INES) project was the organization of four country-led developmental networks: Network A on Educational Outcomes, Network B on Student Destinations, Network C on School Features and Processes, and Network D on Expectations and Attitudes Toward Education, led by the United States, Sweden, the Netherlands, and the United Kingdom,
UNESCO
covered 57 of its member states (Puryear, 1995). This first "World Survey" was described as a "situation report" to guide countries toward the goals laid out in the Universal Declaration of Human Rights (UNESCO, 1955, p. 13, cited in Heyneman, 1999).
In the 1960s, UNESCO was considered the "conduit for state-of-the-art statistical techniques" and was the most reliable source of cross-national education information (Heyneman, 1999, p. 65). In the 1970s, UNESCO organized the creation of the International Standard Classification of Education (ISCED), a major step toward improving the comparability of education data (Bottani & Tuijnman, 1994). The quality of education
success of the WEI project, however, shows that the interest in high-quality, comparable education indicators extends far beyond the developed countries of the OECD.
statistical indicator in the education sector, but these groups lacked the political power and expertise found in a number of key national governments to make them what they have recently become. A central part of the story of international data and statistical indicators has been the thorough integration of national governments into the process. As technocratic operations of governance, with their heavy reliance on data to measure problems and evaluate policies, have become standard in the second half of the 20th century, wealthier national governments have invested in statistical systems and analysis capabilities. As was the case for the IEA and its massive TIMSS project, several key nations lent crucial expertise and legitimization to the process, both of which were clearly missing in earlier attempts. Although this "partnership" has not always been conflict free, it has taken international agencies to new technical and resource levels.
As noted earlier, much of the success of the OECD education indicators effort has come from the integration of national experts, often from ministries of education or national statistical offices, into international indicator development teams. This has improved the quality of the data collected as well as the national legitimacy of the data reported. A number of decentralized states, including Canada, Spain, and the United States, have used the international indicators produced by the OECD as benchmarks for state or provincial indicator reports. As more national governments build significant local control of education into national systems, this use of international indicators at local levels will become more widespread (Astiz, Wiseman, & Baker, 2002). In the case of Canada, the internationally sanctioned framework provides legitimacy to a specific list of indicators that might not have gained a sufficient level of agreement across the provinces involved (e.g., Statistics Canada and Council of Ministers of Education, Canada, 2000). The initial release of results from the PISA project will take this one step further, in that the OECD will provide participating countries with reports focused on their national results, much as the National Assessment of Educational Progress produces reports for each of the 50 U.S. states. This reporting scheme will allow participating countries
to "release" their national data at the same time as the international data are released. The same could easily occur with releases of subnational indicators in conjunction with international releases. This form of simultaneous release is seen as an effective way to create policy debate at a number of levels within the American system, as illustrated by the U.S. National Center for Education Statistics's ability to generate public interest in its release of achievement indicators from TIMSS and TIMSS-R. International education indicators provide constituencies within national education systems another vantage point from which to effect change and reform.
These internationally legitimated evaluation mechanisms provide par-
Conclusions
Briefly described here are four main trends behind the massive collection of data and construction of statistical indicators cross-nationally in the education sector over the past several decades: (a) greater coordination and networks of organizations dedicated to international data collection, (b) the integration of national governments' statistical expertise and resources into international statistical efforts that lead to statistical indicators, (c) the political use of cross-national comparisons across a number of public sectors, and (d) a near universal acceptance of the validity of statistical indicators to capture central education processes. Although the examples presented here of each factor focus more on elementary and secondary schooling, the same could be said for indicators of tertiary education and more broadly defined programs designed for adults (OECD, 1996, 1998a). The only difference is that the development of a wide range of international statistical indicators for higher education (i.e., indicators of higher education systems instead of research and development products of higher education) and lifelong learning lags behind what has happened for elementary and secondary education, but there are a number of signs that the higher education sector will incorporate
References
Annie E. Casey Foundation. (2001). Kids count data book 2001: State profiles of child well-being. Washington, DC: Center for the Study of Social Policy.
Astiz, F. M., Wiseman, A. A., & Baker, D. P. (2002). Slouching towards decentralization: Consequences of globalization for curricular control in national education systems. Comparative Education Review, 46, 66–88.
Bottani, N., & Tuijnman, A. (1994). International education indicators: Framework, development and interpretation. In N. Bottani & A. Tuijnman (Eds.), Making education count: Developing and using international indicators. Paris: Organisation for Economic Co-operation and Development.
Federal Interagency Forum on Child and Family Statistics. (2001). America's children: Key national indicators of well-being. Washington, DC: U.S. Government Printing Office.
Ferriss, A. L. (1988). The uses of social indicators. Social Forces, 66, 601–617.
Guthrie, J. W., & Hansen, J. S. (Eds.). (1995). Worldwide education statistics: Enhancing UNESCO's role. Washington, DC: National Academy Press.
Heyneman, S. P. (1986). The search for school effects in developing countries: 1966–1986 (Seminar Paper No. 33, Economic Development Institute). Washington, DC: The World Bank.
Heyneman, S. P. (1993). Educational quality and the crisis of educational research. International Review of Education, 39, 511–517.
Heyneman, S. P. (1999). The sad story of UNESCO's education statistics. International Journal of Educational Development, 19, 65–74.