
This article was downloaded by: [Duke University Libraries]

On: 10 October 2014, At: 05:31


Publisher: Routledge
Informa Ltd Registered in England and Wales Registered Number: 1072954 Registered
office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Peabody Journal of Education


Publication details, including instructions for authors and
subscription information:
http://www.tandfonline.com/loi/hpje20

Worldwide Growth and Institutionalization of Statistical Indicators for Education Policy-Making
Thomas M. Smith & David P. Baker
Thomas M. Smith & David P. Baker
Published online: 22 Jun 2011.

To cite this article: Thomas M. Smith & David P. Baker (2001) Worldwide Growth and
Institutionalization of Statistical Indicators for Education Policy-Making, Peabody Journal of
Education, 76:3-4, 141-152, DOI: 10.1080/0161956X.2001.9681995

To link to this article: http://dx.doi.org/10.1080/0161956X.2001.9681995



PEABODY JOURNAL OF EDUCATION, 76(3&4), 141–152


Copyright © 2001, Lawrence Erlbaum Associates, Inc.

Worldwide Growth and Institutionalization of Statistical Indicators for Education Policy-Making

Thomas M. Smith
Peabody College
Vanderbilt University

David P. Baker
Pennsylvania State University

The use of widely published statistical indicators (also referred to as social indicators when outside the purely economic realm) on the condition of national education systems has in recent decades become a standard part of the policy-making process throughout the world. Unlike the usual policy-related statistical analysis, statistical indicators are derived measures, often combining multiple data sources and several “statistics,” that are uniformly developed across nations, are repeated regularly over time, and have come to be accepted as summarizing the condition of an underlying complex process. Perhaps the best-known among all statistical indicators is the gross national product of nations, derived from a statistical formula that attempts to summarize all

An earlier version of this article appeared under the title “International Education Statis-
tics: The Use of Indicators to Evaluate the Condition of Education Systems.” In J. Guthrie
(Ed.), Encyclopedia of education (2nd ed.). New York: Macmillan.
Requests for reprints should be sent to Thomas M. Smith, Peabody College, Box 514, Van-
derbilt University, Nashville, TN 37203.


of the business activity of a nation’s economy into one meaningful number. Over the past few decades, international statistical indicators of educational processes have made considerable advances in quantity, quality, and acceptance among policymakers (Bottani & Tuijnman, 1994).
These cross-national indicators often have significant impact on the
public and the education establishment. In the United States, for example,
the New York Times gives high visibility to reports based on indicators of
where American students rank in the latest international math or science
tests or how educational expenditures per student, teacher salaries, or high
school dropout rates compare across countries. National education ministries or departments frequently plan press releases to put their own spin
on indicators when statistical indicator reports such as the Organisation
for Economic Co-operation and Development’s (OECD) annual Education
at a Glance (EAG) are published. The use of indicators for comparisons and
strategic mobilization is now a regular part of educational politics. For
example, after the 1996 EAG indicators showed their salaries comparing unfavorably with those of their Belgian and German neighbors, Dutch teachers used these indicators to lobby for increases in their salaries. Similarly, in
the United States, comparisons of nations across a statistical indicator of
dropouts were used to highlight comparatively low high school comple-
tion rates in 2000. In an extreme, but illustrative, case, one nation’s incum-
bent political party requested that the publication of EAG be delayed until
after parliamentary elections because of the potentially damaging news
about how its education system compared to other OECD nations.
These examples of the widespread impact of international education
statistical indicators are all the more interesting considering that an earli-
er attempt to set up a system of international statistical indicators of
education almost 30 years ago failed. Early attempts by the OECD to
develop statistical indicators on education systems, known as the Social
Indicators Movement, fell apart as idealism about the utility of a technical-
functionalist approach to education planning receded in the 1970s (Bot-
tani & Tuijnman, 1994). This article reviews the development of education
statistical indicators, particularly as applied to national and cross-national
issues of education, and examines how an internationally cooperative
effort reversed the earlier pattern of failure.

History of Social Indicators

The Social Indicators Movement, born in the early 1960s, attempted to establish a “system of social accounts” that would allow for cost–benefit analyses of the social components of expenditures already indexed in National Income and Product Accounts (Land, 2000). Many academics and policymakers were concerned about the “social costs” of economic
growth (Noll, n.d.), and social indicators were seen as a means to monitor
the social impacts of economic expenditures. Social indicators are defined
as time series measures “. . . used to monitor the social system, helping to
identify changes and to guide intervention to alter the course of social
change” (Ferriss, 1988, p. 601). Examples of social indicators include
unemployment rates, crime rates, estimates of life expectancy, health sta-
tus indexes such as the average number of “healthy days” in the past
month, rates of voting in elections, and measures of subjective well-being, as well as education measures such as school enrollment rates and achievement test scores reported on aggregates of students such as states and nations (Land, 2000).
Enthusiasm for social indicators led to the establishment of the Social
Science Research Council (SSRC) Center for Coordination of Research on
Social Indicators in 1972, funded by the National Science Foundation, and
the initiation of several continuing sample surveys, including the General
Social Survey (GSS) and the National Crime Survey (NCS). As reporting
mechanisms, the Census Bureau published three comprehensive social
indicators data and chart books in 1974, 1978, and 1980. The academic
community launched the international journal Social Indicators Research in
1974. Many other nations and international agencies produced indicator
volumes of their own during that time period (e.g., OECD, 1982; United
Nations, 1975). In the 1980s, however, federal funding cuts led to the dis-
continuation of numerous comprehensive national and international
social indicators activities, including closing the SSRC Center. Some have
argued that a shift away from data-based decision making toward policy
based on “conservative ideology” during the Reagan administration, cou-
pled with a large budget deficit, helped pull the financial plug on the
social indicators movement (Land, 2000). Field-specific indicators contin-
ue to be published by government agencies in areas such as education,
labor, health, crime, housing, science, and agriculture, but the systematic
public reporting envisioned in the 1960s has largely not been sustained,
although comprehensive surveys of the condition of youth have recently
arisen in both the public and private spheres (e.g., Annie E. Casey Foun-
dation, 2001; Federal Interagency Forum on Child and Family Statistics,
2001). Some of the main data collections that grew out of the social indica-
tors movement, including the GSS and NCS, continue, as do a range of
longitudinal and cross-sectional surveys in other social areas. On the aca-
demic side, a literature involving social indicators has continued to grow,
mostly focused on quality-of-life issues. Although education is seen as a
component of quality of life, it tends to be used in a fairly rudimentary fashion. For example, out of 331 articles published in Social Indicators Research between 1994 and 2000, only 26 addressed education with any
depth. Although the widely cited Human Development Index compiled
by the United Nations Development Programme has education and liter-
acy components, they are limited to basic measures of school enrollments
and, arguably, noncomparable country-level estimates of literacy rates,
which are often based on census questions about whether someone can
read or write (Heyneman, 1999; Puryear, 1995). As a subfield of social
indicators, however, the collection and reporting of education statistics
has expanded rapidly since the early 1980s in the United States, the early 1990s in OECD countries, and more recently, in developing countries.

State of International Education Statistical Indicators Today

Among the current array of statistical indicators of education within and across nations are some that go far beyond the basic structural characteristics and resource inputs, such as student–teacher enrollment ratios and
expenditures per student, found in statistical almanacs. More data-intensive
and statistically complex indicators of participation in education, financial
investments, decision-making procedures, public attitudes toward educa-
tion, differences in curriculum and textbooks, and retention and dropout
rates in tertiary (higher) education, as well as student achievement in math,
science, reading, and civics are now standard parts of indicators reports
(OECD, 2000, 2001a). For example, the OECD summarizes total education
enrollment through an indicator on the “average years of schooling a 5-
year-old can expect under current conditions,” which is calculated by sum-
ming the net enrollment rates for each single year of age and dividing by
100 (OECD, 2000, 2001a). Unlike the gross enrollment ratios, calculated as
total enrollment in a particular level of education, regardless of age, divided
by the size of the population in the “official” age group corresponding to
that level, which have traditionally been reported in United Nations Educa-
tion, Science, and Cultural Organization (UNESCO) statistical yearbooks,
the “schooling expectancy” measures reported by OECD aggregate across
levels of education and increase comparability by weighting enrollment by
the size of the population that is actually eligible to enroll. Examples of other
indicators that attempt to summarize complex issues into concise numerical
indexes include measures of individual and societal rates of return to invest-
ment in different levels of education (OECD, 1998b), measures of factors
contributing to differences in relative statutory teachers’ salary costs per
student (OECD, 1997), and “effort indexes” for education funding, which
adjust measures of public and private expenditures per student by per capita wealth (OECD, 2001a). Furthermore, the OECD is working to develop assessment and reporting mechanisms to compare students’ problem-
solving skills, their ability to work in groups, and their skills in technology
use. There are few components of the world education enterprise that stat-
isticians, psychometricians, and survey methodologists are not trying to
measure and put into summary indicator forms for public consumption.
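The contrast drawn above between the OECD schooling-expectancy indicator and the traditional UNESCO gross enrollment ratio can be sketched in a few lines of code. This is a minimal illustration of the two calculations as the text describes them; the enrollment rates and population figures below are entirely hypothetical, not actual OECD or UNESCO data.

```python
# Minimal sketches of the two enrollment indicators contrasted above.
# All rates and populations are hypothetical illustrations.

def schooling_expectancy(net_enrollment_rates_pct):
    """OECD 'schooling expectancy': average years of schooling a
    5-year-old can expect, computed by summing the percent net
    enrollment rates for each single year of age (5, 6, 7, ...)
    and dividing by 100."""
    return sum(net_enrollment_rates_pct) / 100

def gross_enrollment_ratio(total_enrolled, official_age_population):
    """Traditional UNESCO gross enrollment ratio for one level of
    education: total enrollment regardless of age, divided by the
    population of the 'official' age group for that level.
    Over-age enrollees can push the ratio above 100."""
    return 100 * total_enrolled / official_age_population

# Hypothetical country: near-universal enrollment at ages 5-16,
# tapering off at ages 17-20.
rates = [98] * 12 + [85, 60, 40, 25]
print(schooling_expectancy(rates))                   # 13.86 expected years

# Over-age repeaters make the gross ratio exceed 100 percent.
print(gross_enrollment_ratio(1_050_000, 1_000_000))  # 105.0
```

The sketch makes the text's comparability point concrete: the expectancy measure weights enrollment by the population actually eligible to enroll at each age, whereas the gross ratio can exceed 100% whenever over-age students are enrolled.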
High-quality indicators require data that are accurate and routinely
available. Behind the creation of so many high-quality indicators of nation-
al education systems is the routine collection of a wide array of education
data in most industrialized countries. In addition to the costs of gathering data and information for national purposes, a large human and financial
investment is made to ensure that the data meet standards of comparabili-
ty across countries. Country-level experts, whether from statistical or poli-
cy branches of governments, frequently convene to discuss the kinds of
data that should be collected, to reach consensus on the most methodolog-
ically sound means of collecting the data, and to construct indicators.

Growth and Institutionalization of International Data Collections

Numerous international organizations collect and report education data, with the range of data types expanding and the complexity of collection and analysis increasing. Hence, the total cost of these collections
lection and analysis increasing. Hence, the total cost of these collections
has increased dramatically since 1990. Government financial support, and
in some cases control, has been a significant component of this growth in
both the sophistication of data and the scope of collections. Briefly
described here are some of the basic institutional components involved in
the creation of a set of international organizations or organizational struc-
tures that provide the institutional infrastructure for creating and sustain-
ing international education statistical indicators.

IEA

Collaboration on international assessments began as early as the late 1950s, when a group composed primarily of academics formed the International Association for the Evaluation of Educational Achievement (IEA). In 1965, 12
countries undertook the First International Mathematics Study. Since that
time, the IEA has conducted 14 different assessments, covering the topics of
mathematics, science, reading, civics, and technology. Findings from IEA’s
Second International Mathematics Study were the primary justification for the declaration in the early 1980s that the United States was a nation “at risk.” In the 1990s, government ministries of education became increasingly important for both funding and priority setting in these studies. The results of the Third
International Mathematics and Science Study (TIMSS) were widely report-
ed in the United States and served as fuel for the latest educational reform
efforts. As governments became increasingly involved in setting the IEA
agenda, some key aspects of the “research” orientation of earlier surveys
were no longer funded (e.g., the pretest/posttest design in the Second Inter-
national Science Study), and other innovative activities were added. For
example, the TIMSS video study conducted in Germany, Japan, and the
United States applied some of the most cutting-edge research technology to international assessments. As part of the 1999 repeat of TIMSS (TIMSS-R), additional countries have agreed to have their teachers videotaped, and sci-
ence classrooms have been added to the mix. Over time, the IEA assess-
ments have become more methodologically complex, with TIMSS using the
latest testing technology (IRT, multiple imputation, etc.). As the technology
behind the testing has become more complex, cross-national comparisons
of achievement have become widely accepted, and arguments that educa-
tion is culturally determined or that the tests are invalid, and thus achieve-
ment results cannot be compared across countries, have for the most part disappeared (Heyneman, 1986).

OECD

The IEA has been the key innovator in the area of education assess-
ment, whereas the OECD has led the development of a cross-nationally
comparable system of education indicators. After a failed attempt to initi-
ate an ambitious system of data collection and reporting in the early
1970s, the OECD, with strong support from the United States, undertook
the development of a new system of cross-nationally comparable statisti-
cal indicators in the late 1980s (Bottani & Tuijnman, 1994). The Ministers
of Education of the OECD countries agreed at a meeting in Paris in
November 1990 that “information and data are preconditions for sound
decision making, prerequisite of accountability and informed policy
debate” (Bottani & Tuijnman, 1994, p. 25). Ministers agreed that data cur-
rently available lacked comparability and relevance to education policy.
Although led by the OECD, the core of the Indicators of Education Sys-
tems (INES) projects was the organization of four country-led develop-
mental networks: Network A on Educational Outcomes, Network B on
Student Destinations, Network C on School Features and Processes, and
Network D on Expectations and Attitudes Toward Education—led by the
United States, Sweden, the Netherlands, and the United Kingdom, respectively. The OECD Secretariat chairs a technical group on enrollments, graduates, personnel, and finances. These networks, involving up
to 200 government statisticians, policymakers, and in some cases, aca-
demics, designed indicators, negotiated the definitions for data collec-
tions, and supplied data for annual reporting. This model of shared
ownership in the development of Education at a Glance, at first a biennial
and then annual publication, contributed to its success—participants in
the networks and the Technical Group invested the time needed to supply
high-quality data because they had a stake in the publication’s success.
As INES developed, it evolved from a reporting scheme where administrative databases within countries were mined and aggregated into an initiative that mounts its own cross-national surveys—school surveys,
public attitudes surveys, adult literacy surveys, and now surveys of
student achievement. The largest and most expensive project to date is the
OECD Programme for International Student Assessment (PISA). PISA is
an assessment of reading literacy, mathematical literacy, and scientific
literacy, jointly developed by participating countries and administered to
samples of 15-year-olds in their schools (OECD, 2001b). In 2000, PISA was
administered in 32 countries to between 4,500 and 10,000 students per
country. Expected long-term outcomes included a basic profile of knowl-
edge and skills among students at the end of compulsory schooling,
contextual indicators relating results to student and school characteristics,
and trend indicators showing how results change over time (OECD,
2001b). With the results of PISA, the OECD will be, for the first time, able
to report achievement and context indicators specifically designed for
that purpose, rather than relying on IEA data (e.g., from the IEA Reading
Literacy Study or TIMSS) for country rankings and comparisons.

UNESCO

UNESCO has been the main source of cross-national data on education since its inception near the end of the Second World War. The Universal
Declaration of Human Rights proclaimed that “Everyone has the right to
education,” that elementary and “fundamental” education should be
free, that “technical and general education should be made widely avail-
able,” and that “higher education shall be accessible to all on the basis of
merit” (United Nations, 1950, cited in UNESCO, 2000, p. 16).1 UNESCO’s
first questionnaire-based survey of education was conducted in 1950 and
1. For a detailed description of how the right to education came to be expressed in Article 26
of the Universal Declaration of Human Rights, see Appendix 1 of UNESCO (2000).


covered 57 of its member states (Puryear, 1995). This first “World Survey”
was described as a “situation report” to guide countries toward the goals
laid out in the Universal Declaration of Human Rights (UNESCO, 1955,
p. 13, cited in Heyneman, 1999).
In the 1960s, UNESCO was considered the “conduit for state-of-the-art-
statistical techniques” and was the most reliable source of cross-national
education information (Heyneman, 1999, p. 65). In the 1970s, UNESCO
organized the creation of the International Standard Classification of Edu-
cation (ISCED), a major step toward improving the comparability of education data (Bottani & Tuijnman, 1994). The quality of education data collected by UNESCO eroded, however, in the 1970s and 1980s (Heyneman, 1999). Although as many as 175 countries regularly reported
information on their education systems to UNESCO in the 1980s and
1990s, much of the data reported was widely considered unreliable
(Heyneman, 1993, 1999; Puryear, 1995). Throughout the 1990s, the prima-
ry analytical report on education published by UNESCO, The World Edu-
cation Report, based many of its analyses and conclusions on educa-
tion data collected by agencies other than UNESCO (e.g., UNESCO, 2000).
Between 1984 and 1996, personnel and budgetary support for statistics
at UNESCO declined, and UNESCO’s ability to assist member countries
in the development of their statistical infrastructure or in the reporting of
data was severely limited (Guthrie & Hansen, 1995; Heyneman, 1993). In
the late 1990s, however, the World Bank and other international organiza-
tions, as well as influential member countries such as the Netherlands and
the United Kingdom, increased pressure and financial contributions to
improve the quality of the education data UNESCO collects. Collabora-
tion between UNESCO and OECD began on a World Bank–financed
World Education Indicators (WEI) project, which capitalized on OECD’s
experience, legitimacy, and status to expand the OECD “Indicators
Methodology” to the developing world. Although this project includes only 18 nations (nearly 50, counting the OECD members), it has helped raise
the credibility of indicator reporting in at least some countries in the
developing world (OECD, 2001a; UNESCO & OECD, 2001). Even though
this project has in many ways “cherry-picked” countries having reason-
ably advanced national education data systems, the collaborative spirit
imported from OECD’s INES project has been quite effective. A major step
for the newly constituted UNESCO Institute for Statistics (UIS) will be to
take this project to scale. Significantly expanding WEI will be quite a
challenge, as the financial and personnel costs needed to increase both
the quality of national data collection and reporting, as well as process-
ing and indicator production on the international level, are likely to
exceed the budget and staff capacity of UIS in the short term. The visible success of the WEI project, however, shows that the interest in high-quality, comparable education indicators extends far beyond the developed countries of the OECD.

Integration of National Resources and Expertise Into the Process

Many of the international organizations dedicated to education data collection were in operation well before the renaissance of the widely used statistical indicator in the education sector, but these groups lacked the
political power and expertise found in a number of key national govern-
ments to make them what they have recently become. A central part of the
story of international data and statistical indicators has been the thorough
integration of national governments into the process. As technocratic oper-
ations of governance, with their heavy reliance on data to measure prob-
lems and evaluate policies, have become standard in the second half of the
20th century, wealthier national governments have invested in statistical
systems and analysis capabilities. As was the case for the IEA and its mas-
sive TIMSS project, several key nations lent crucial expertise and legitimacy to the process, both of which were clearly missing in earlier attempts.
Although this “partnership” has not always been a conflict-free one, it has
taken international agencies to new technical and resource levels.
As noted earlier, much of the success of the OECD education indicators
effort has been the integration of national experts, often from ministries of
education or national statistical offices, into international indicator devel-
opment teams. This has improved the quality of the data collected as well
as the national legitimacy of the data reported. A number of decentralized
states, including Canada, Spain, and the United States, have used the
international indicators produced by OECD as benchmarks for state or
provincial indicators reports. As more national governments build some
significant local control of education into national systems, this use of
international indicators at local levels will become more widespread
(Astiz, Wiseman, & Baker, 2002). In the case of Canada, the internationally
sanctioned framework provides legitimacy to a specific list of indicators
that might not have gained a sufficient level of agreement across provinces
involved (e.g., Statistics Canada and Council of Ministers of Education,
Canada, 2000). The initial release of results from the PISA project will take
this one step further, in that the OECD will provide participating countries
reports focused on their national results, in a way similar to how the
National Assessment of Educational Progress produces reports for each of
the 50 U.S. states. This reporting scheme will allow participating countries to “release” their national data at the same time as the international data
are released. The same could easily occur with releases of subnational indi-
cators in conjunction with international releases. This form of simultane-
ous release is seen as an effective way to create policy debate at a number
of levels within the American system, as illustrated by the U.S. National
Center for Education Statistics’s ability to generate public interest in its
release of achievement indicators from TIMSS and TIMSS-R. International
education indicators provide constituencies within national education sys-
tems another vantage point to effect change and reform.
These internationally legitimated evaluation mechanisms provide participating nations with sources of external support, in the form of low rankings, for educational reform movements that might not be popular
among national constituencies (Meyer, 1987), such as unionized teachers
or reluctant taxpayers. In fact, the disconnectedness of aggregate indica-
tors from what goes on in a particular classroom, school, or local gover-
nance allows national and subnational constituencies the ability to pick
and choose the measures that fit their local needs. A worldwide indicators
system does not imply a worldwide accountability framework, although
it provides external legitimacy for nation-states when needed, as well as
tools for reform.

Conclusions

Briefly described here are four main trends behind the massive collec-
tion of data and construction of statistical indicators cross-nationally in
the education sector over the past several decades: (a) greater coordination
and networks of organizations dedicated to international data collection,
(b) the integration of national governments’ statistical expertise and
resources into international statistical efforts that lead to statistical indica-
tors, (c) the political use of cross-national comparisons across a number of
public sectors, and (d) a near universal acceptance of the validity of statis-
tical indicators to capture central education processes. Although exam-
ples presented here of each factor focus more on elementary and sec-
ondary schooling, the same could be said for indicators of tertiary
education and more broadly defined programs designed for adults
(OECD, 1996, 1998a). The only difference is that the development of a
wide range of international statistical indicators for higher education (i.e.,
indicators of higher education systems instead of research and develop-
ment products of higher education) and lifelong learning lags behind
what has happened for elementary and secondary education, but there
are a number of signs that the higher education sector will incorporate similar indicators of instruction, performance, and related processes in the near future. For example, the OECD has invested considerable time
and effort working with its member countries to develop cross-nationally
comparable measures of public and private expenditures on tertiary edu-
cation (including subsidies to households), measures of retention of ter-
tiary students in different types of programs, and information on the pub-
lic, private, and social returns to completion of a tertiary-level degree. As
policymakers in countries where tertiary education has been traditionally
funded out of public coffers grapple with how to finance ever-expanding
enrollments, demand for structural and process indicators of tertiary finance continues to grow.


As school systems decentralize throughout the world, indicators serve
as a means for monitoring school equity, efficiency, and effectiveness.
Although cross-national indicators are not part of a worldwide account-
ability system per se, they do serve as effective benchmarks for compar-
ing and contrasting different models of school organization and their
associated outcomes. If the expansion of the current international statisti-
cal indicator set remains closely tied to the information needs of educa-
tion policymakers, these measures, as well as the organizations that col-
lect them, will have a wide impact on policy debates about improving
education for some time to come.

References

Annie E. Casey Foundation. (2001). Kids count data book 2001: State profiles of child well-being.
Washington, DC: Center for the Study of Social Policy.
Astiz, F. M., Wiseman, A. A., & Baker, D. P. (2002). Slouching towards decentralization: Con-
sequences of globalization for curricular control in national education systems. Compara-
tive Education Review, 46, 66–88.
Bottani, N., & Tuijnman, A. (1994). International education indicators: Framework, develop-
ment and interpretation. In N. Bottani & A. Tuijnman (Eds.), Making education count:
Developing and using international indicators. Paris: Organisation for Economic Co-
operation and Development.
Federal Interagency Forum on Child and Family Statistics. (2001). America’s children: Key
national indicators of well-being. Washington, DC: U.S. Government Printing Office.
Ferriss, A. L. (1988). The uses of social indicators. Social Forces, 66, 601–617.
Guthrie, J. W., & Hansen, J. S. (Eds.). (1995). Worldwide education statistics: Enhancing
UNESCO’s role. Washington, DC: National Academy Press.
Heyneman, S. P. (1986). The search for school effects in developing countries: 1966–1986 (Seminar
Paper No. 33, Economic Development Institute). Washington, DC: The World Bank.
Heyneman, S. P. (1993). Educational quality and the crisis of educational research. Interna-
tional Review of Education, 39, 511–517.
Heyneman, S. P. (1999). The sad story of UNESCO’s education statistics. International Journal
of Educational Development, 19, 65–74.

Land, K. (2000). Social indicators. In E. F. Borgatta & R. V. Montgomery (Eds.), Encyclopedia of
sociology (Rev. ed.). New York: Macmillan.
Meyer, J. W. (1987). The world polity and the authority of the nation-state. In G. M. Thomas,
J. W. Meyer, F. O. Ramirez, & J. Boli (Eds.), Institutional structure: Constituting state, society,
and the individual. Newbury Park, CA: Sage.
Noll, H.-H. (n.d.). Social indicators and social reporting: The international experience. Retrieved
from the Canadian Council on Social Development Web site: www.ccsd.ca/noll1.html.
OECD. (1982). The OECD list of social indicators (OECD Social Indicator Development Pro-
gramme). Paris: Author.
OECD. (1992). High-quality education and training for all, part 2. Paris: OECD/CERI.
OECD. (1996). Lifelong learning for all. Paris: Author.
OECD. (1997). Education at a glance: OECD indicators 1997. Paris: OECD/CERI.
OECD. (1998a). Redefining tertiary education. Paris: Author.
OECD. (1998b). Education at a glance: OECD indicators 1998. Paris: OECD/CERI.
OECD. (2000). Investing in education: Analysis of the 1999 World Education Indicators. Paris:
OECD/CERI.
OECD. (2001a). Education at a glance: OECD indicators 2001. Paris: OECD/CERI.
OECD. (2001b). Knowledge and skills for life: First results from PISA 2000. Paris: OECD/CERI.
Puryear, J. M. (1995). International education statistics and research: Status and problems.
International Journal of Educational Development, 15, 79–91.
Statistics Canada and Council of Ministers of Education, Canada. (2000). Education indicators
in Canada: Report of the Pan-Canadian Education Indicators Program 1999. Toronto: Council
of Ministers of Education, Canada.
United Nations. (1950). Universal declaration of human rights adopted and proclaimed by the Gen-
eral Assembly of the United Nations on the tenth day of December 1948, final authorized text.
New York: Author.
United Nations. (1975). Towards a system of social and demographic statistics. New York: Author.
United Nations Educational, Scientific and Cultural Organization. (1955). World survey of educa-
tion: Handbook of educational organizations and statistics. Paris: Author.
United Nations Educational, Scientific and Cultural Organization. (2000). World education report
2000: The right to education: Towards education for all throughout life. Paris: UNESCO Pub-
lishing.
United Nations Educational, Scientific and Cultural Organization and the Organisation for Eco-
nomic Co-operation and Development. (2001). Teachers for tomorrow’s schools: Analysis of the
world education indicators. Paris: UNESCO Publishing/UIS/OECD.
