International Assessments


Larry Suter

International Association for Educational Assessment
Ton Luijten

International Association for the Evaluation of Educational Achievement
Tjeerd Plomp

IEA and OECD Studies of Reading Literacy
Vincent Greaney
Thomas Kellaghan

IEA Study of Technology in the Classroom
Ronald E. Anderson

IEA Third International Mathematics and Science Study
Albert E. Beaton

Political Democracy and the IEA Study of Civic Education
Judith Torney-Purta
Jo-Ann Amadeo
John Schwille


International comparisons of student achievement involve assessing the knowledge of elementary and secondary school students in subjects such as mathematics, science, reading, civics, and technology. The comparisons use test items that have been standardized and agreed upon by participating countries. These complex studies have been carried out since 1959 to explicitly compare student performance among countries for students at a common age. To participate in such a comparative study, a country must demonstrate that it has had prior experience in conducting empirical studies of education.

Comparing student achievement between countries has several goals. To policymakers, country-to-country comparisons of student performance help indicate whether their educational system is performing as well as it could. To a researcher of education issues, the studies provide a basis for hypothesizing whether some policies and practices in education are necessary or sufficient for high student performance (such as requiring all teachers to obtain college degrees in the subject area they teach). To teachers and school administrators, international studies provide examples of behavior that may be a source of new forms of practice and self-evaluation.

Types of Study Results

The results of a large international study in 1995 showed that eighth-grade teachers in the United States are often not involved in decisions about the content areas of their teaching, as teachers are in other nations. U.S. teachers work longer hours than those in most other countries, they do not have as much time during the day to prepare for classes, and their daily classroom teaching is disrupted more often by things such as announcements, band practice, and scheduling changes. Moreover, the organization of curriculum used by elementary and middle schools in the United States appears not to be focused on topics that will propel students toward a more advanced understanding of mathematics. Comparisons with other countries show that U.S. students are just as interested in science and mathematics as other students, they study as long, and they watch just as much television.

Organizational History

Education researchers and policymakers from twelve countries first established a plan for making large-scale cross-national comparisons between countries on student performance in 1958 at the UNESCO Institute for Education in Hamburg, Germany. The first successful large-scale quantitative international study in mathematics was conducted in 1965 by the International Association for the Evaluation of Educational Achievement (IEA) and included Australia, Belgium, England, Finland, France, Germany, Israel, Japan, Netherlands, Scotland, Sweden, and the United States. Since then, studies in fourteen or more countries have been conducted periodically in several subject areas of elementary and secondary education.

Between 1965 and 2001 the IEA sponsored studies of mathematics in 1965, 1982, 1995, and 1999; science in 1970, 1986, 1995, and 1999; reading in 1970, 1991, and 2001; civics in 1970 and 1998; and technology in 1990 and 1999. The Educational Testing Service conducted an International Assessment of Educational Progress in science and mathematics in 1990. The Adult Literacy and Lifeskills survey is a large-scale comparative survey designed to identify and measure prose literacy, numeracy, and analytical reasoning in the adult population (those between sixteen and sixty-five years of age). This survey was conducted in 1994 and 2001.

Studies such as these require the development of a set of test items, which are translated into the languages of the participating countries. The translated items are checked for accuracy and pretested in each country to determine whether they contain ambiguities or errors that would make them unsuitable for use in the final study (about three times as many items are written as are finally used). The participating countries collectively agree upon a framework to define critical aspects of the topic area. For example, an elementary mathematics test would include items in numbers, geometry, algebra, functions, analysis, and measurement, and would also include items representing different aspects of student performance, such as knowing the topic, using procedures, solving problems, reasoning, and communicating. However, no single assessment could comprehensively cover an entire topic for all countries.
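Pretest screening of this kind is usually psychometric: items that prove too easy, too hard, or poorly discriminating in a pilot sample are dropped before the final study. The sketch below illustrates classical item screening by difficulty (proportion correct) and a crude discrimination index; the thresholds and scoring rule are illustrative assumptions, not the IEA's actual criteria.

```python
# Screen pilot-test items by classical difficulty and discrimination.
# Thresholds (min_p, max_p, min_disc) are illustrative, not IEA standards.

def screen_items(responses, min_p=0.2, max_p=0.8, min_disc=0.2):
    """responses: list of per-student lists of 0/1 item scores.

    Returns indices of items passing both screens.
    """
    n_students = len(responses)
    n_items = len(responses[0])
    totals = [sum(r) for r in responses]  # each student's total score
    keep = []
    for i in range(n_items):
        scores = [r[i] for r in responses]
        p = sum(scores) / n_students      # difficulty: proportion correct
        # crude discrimination: gap between mean total score of students
        # who answered correctly and those who answered incorrectly
        right = [t for s, t in zip(scores, totals) if s == 1]
        wrong = [t for s, t in zip(scores, totals) if s == 0]
        if not right or not wrong:
            continue                      # everyone right (or wrong): drop
        disc = (sum(right) / len(right) - sum(wrong) / len(wrong)) / n_items
        if min_p <= p <= max_p and disc >= min_disc:
            keep.append(i)
    return keep
```

In practice international studies use far more elaborate machinery (item response theory, translation verification, differential item functioning checks), but the basic logic of flagging items from pilot data is the same.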

The tests are administered to a sample of students in 100 to 200 schools, which are selected to represent all students in the country. An international referee monitors the school selection process to ensure that all countries follow correct sampling procedures. The test items are scored according to internationally agreed-upon procedures and are analyzed at an international center to ensure cross-national comparability. Countries that do not meet high standards of participation are not included in the comparisons.

Problems of Comparability

Some educators believe that learning is too elusive and culturally specific to be measured in a statistical survey. They believe that the outcomes of education are too diverse, indirect, and unpredictable to be measured in a single instrument. Others believe that comparisons are "odious" because practices that work in one culture may not be appropriate in another culture due to differences in social context and history.

The first IEA study planners were not confident that cross-national comparisons would be valid. They were concerned that the curriculum of different countries would stress different aspects of mathematics, science, or reading, and that any test of student performance might not reflect what students had been taught. To recognize national differences in teaching, the first studies measured the degree to which topics that were emphasized in the school system were actually covered. Curriculum differences were categorized as intended, implemented, or attained curriculum in order to separate the policies of the school district from classroom presentations and actual student performance. The amount of coverage of a topic became an important explanatory variable for between-school and between-country differences in achievement. The analysis showed that students in every country covered the same topics, although often in a different order and with a different emphasis, demonstrating that international tests reflect common content areas across countries and that such comparisons therefore make sense.

Education practices in the countries studied have been found to have more similarities than differences. The differences can be studied, however, and give important insights into which practices can be improved. International studies have helped policymakers understand that student performance is strongly determined by how schools articulate the content areas they are responsible for.

For example, a study conducted in 1965 showed significant differences in how countries approached the teaching of mathematics. Subsequent studies showed which topics of mathematics each country considered important, at what age they were introduced, and how the topics were sequenced. These studies led educators to pay closer attention to the underlying curriculum and the training of teachers in the United States. They also led to the earliest efforts by the mathematics education professionals to develop a single set of standards for mathematics teaching.

Studies of writing have had difficulty in achieving standards that permit comparison across countries. After several attempts to develop a standard set of principles for grading the writing of students across countries, the IEA gave up its efforts to evaluate writing across cultures. However, a study of reading achievement was successfully conducted in elementary and middle school grades in 1970, and studies are being conducted by the IEA and the Organisation for Economic Co-operation and Development (OECD). International studies have shown that U.S. elementary school students have a high performance level in reading compared with the rest of the participating countries, but only moderate performance at grade nine. These results indicate that U.S. students begin school with sufficient ability to read and interpret texts.

Forms of Inquiry

Comparative studies of student achievement require carefully designed statistical surveys. The populations must be defined in a common way for each country, even though definitions of a grade might differ from country to country. For example, one way to ensure comparability is to carefully select a sample of all students who attend whatever grade is most common for fourteen-year-old students. These surveys involve students taking a test for about an hour and completing a background questionnaire about their attitudes toward school. Teachers are asked to complete questionnaires about the curriculum topics they cover and their own professional training.
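The survey design described above is typically a two-stage cluster sample: schools are drawn first, then students (or intact classes) within each sampled school. The sketch below is a simplified illustration with an equal-probability school draw and a fixed number of students per school; real studies weight schools by enrollment, and all names and parameters here are invented.

```python
import random

def two_stage_sample(schools, n_schools, students_per_school, seed=0):
    """schools: dict mapping school id -> list of eligible student ids.

    Stage 1: simple random sample of schools.
    Stage 2: simple random sample of students within each sampled school.
    (Operational studies draw schools with probability proportional to
    size and apply sampling weights; this sketch does not.)
    """
    rng = random.Random(seed)
    sampled_schools = rng.sample(sorted(schools), n_schools)
    sample = {}
    for school in sampled_schools:
        pupils = schools[school]
        k = min(students_per_school, len(pupils))
        sample[school] = rng.sample(pupils, k)
    return sample
```

The clustering is what makes the international referee's check on school selection so important: a biased stage-one draw (for example, omitting rural or low-performing schools) distorts every estimate built on the sample.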

Since the 1990s studies have sometimes used videotape technology to collect information on teaching practices and student activities. For example, large national samples of mathematics classrooms were videotaped in 1995 in Japan, Germany, and the United States, and in 2000 classrooms in additional countries and subjects were videotaped. Videotape methods permit a more careful description of teaching practices than classroom surveys, and they provide a check on the validity of teachers' self-reporting of their practices. Detailed case studies of educational practices in several countries have also provided information about the social context in which students are taught.

International Assessments in the Twentieth Century

The first international studies were carried out by university research centers unaffiliated with government agencies. The results of those studies were published in academic journals, technical volumes, and academic books. During the 1980s these studies influenced policies in American education. Beginning in 1989 government agencies decided that they should have a larger role in organizing and supporting the studies and improving their quality. The National Center for Education Statistics (NCES), an agency of the U.S. Department of Education, and the National Science Foundation provided the leadership and funding support for creating international assessments. The U.S. National Academy of Sciences established an oversight committee called the Board on International Comparative Studies in Education to monitor the progress of these studies.

By 1995 international comparative studies had become an accepted, continuing means of describing the status of educational outcomes and were being carried out regularly by the NCES. Many countries originally participated in these studies in order to analyze a single subject area in a single year. They have since shifted toward a more strategic plan to develop consistently measured trends in educational achievement with international benchmarks.

International Assessments in the Twenty-First Century

The complexity of conducting standardized comparisons of student achievement in many countries will always challenge researchers, yet such comparisons have become institutionalized in many countries. The OECD, which is based in Paris, has gained support from at least twenty-five governments for a continuing series of international comparisons of reading, mathematics, and science. These comparisons began in 2000. Also in 2000 UNESCO established its Institute for Statistics to further institutionalize a process for improving the use of comparative statistics for policymaking.

Studies on the use of technology in schools are being developed to provide new information on forms of instructional technology that are becoming widespread in schools. Schools all over the world have introduced the use of computers and other forms of technology to classroom instruction, and studies seek to determine how educational practices are being altered by these systems.

See also: Assessment, subentry on National Assessment of Educational Progress; Standards for Student Learning; Testing, subentry on International Standards of Test Development.


Bibliography

Black, Paul, and Wiliam, Dylan. 1998. "Inside the Black Box: Raising Standards through Classroom Assessment." Phi Delta Kappan 80(2):139–148.

Comber, L. C., and Keeves, John P. 1973. Science Education in Nineteen Countries: An Empirical Study. Stockholm: John Wiley.

Härnqvist, Kjell. 1987. "The IEA Revisited." Comparative Education Review 31(1):48–55.

Husén, Torsten, ed. 1967. International Study of Achievement in Mathematics, Volume 1. New York: John Wiley.

Husén, Torsten. 1979. "An International Research Venture in Retrospect: The IEA Surveys." Comparative Education Review 23(3):371–385.

Husén, Torsten, and Postlethwaite, T. Neville, eds. 1985. The International Encyclopedia of Education Research and Studies. Oxford: Pergamon Press.

Mullis, Ina; Martin, Michael O.; Beaton, Albert E.; Gonzalez, Eugenio J.; Kelly, Dana L.; and Smith, Teresa A. 1998. Mathematics and Science Achievement in the Final Year of Secondary School: IEA's Third International Mathematics and Science Study (TIMSS). Boston: Center for the Study of Testing, Evaluation, and Education Policy, Boston College.

Robitaille, David F.; Schmidt, William H.; Raizen, Senta; McKnight, Curtis; Britton, Edward; and Nicol, Cynthia. 1993. Curriculum Frameworks for Mathematics and Science. Vancouver, BC, Canada: Pacific Educational Press.

Schmidt, William H., et al. 1996. Characterizing Pedagogical Flow: An Investigation of Mathematics and Science Teaching in Six Countries. Dordrecht, Netherlands: Kluwer.

Stigler, James W.; Gonzales, Patrick A.; Kawanaka, Takako; Knoll, Steffen; and Serrano, Ana. 1999. The TIMSS Videotape Classroom Study: Methods and Findings from an Exploratory Research Project on Eighth-Grade Mathematics Instruction in Germany, Japan, and the United States. Washington, DC: National Center for Education Statistics.

Suter, Larry. 2001. "Is Student Achievement Immutable? Evidence from International Studies on Schooling and Student Achievement." Review of Educational Research 70(4):529–545.

Travers, Kenneth J., and Westbury, Ian. 1990. The IEA Study of Mathematics, I: Analysis of Mathematics Curricula. Oxford: Pergamon Press.

Internet Resources

International Association for the Evaluation of Educational Achievement (IEA). 2002. <>.

National Center for Education Statistics. 2001. <>.

Larry Suter


The International Association for Educational Assessment (IAEA) was conceived as an international association of measurement agencies in 1974 at a meeting at Educational Testing Service (ETS) in Princeton, New Jersey. Later that same year a preparatory committee, representing various geographic regions, met at CITO, the Institute for Educational Measurement in the Netherlands, to formulate the plans for the association.

In 1976 the United Nations Educational, Scientific and Cultural Organization (UNESCO) admitted IAEA to C (information sharing) status as a nongovernmental organization (NGO). In 1981 UNESCO admitted IAEA to B (consultative) status.

Purpose and Objectives

The broad purpose of IAEA is to assist educational agencies in the development and appropriate application of educational assessment techniques to improve the quality of education. IAEA's main objectives are to:

  • improve communication among organizations interested in educational assessment through the sharing of professional expertise, conferences, and publications, while providing a framework that includes cooperative research, training, and projects involving educational assessment;
  • make expertise in assessment techniques readily available for the solution of problems in the field of educational evaluation;
  • cooperate with other organizations and agencies having complementary interests;
  • engage in other activities leading to the improvement of assessment techniques and their appropriate use by educational agencies throughout the world.


IAEA has three main membership categories: primary organizations, affiliate organizations, and individuals. Primary organization members are not-for-profit organizations, often associated in one way or another with ministries of education, that have educational assessment as their primary function. Affiliate organizations are those that make major use of educational assessment techniques, or financial agencies that devote a large part of their budgets to work involving educational assessment. Individual members are those with a professional interest in assessment who may not be associated with an organization that has educational assessment as a primary concern. IAEA is governed by an executive committee whose officers and members are elected by the primary organization members. A subscription to the journal Assessment in Education: Principles, Policy and Practice is included with membership.

Activities and Projects

IAEA organizes annual conferences on assessment themes of international significance. Conference venues rotate on a geographic basis, with a primary organization member in the host region assuming responsibility for organizing the conference. IAEA conferences have focused on topics such as standard setting, school-based assessment, public examinations, and admission to higher education.

In cooperation with UNESCO, IAEA organizes roundtables on the impact of assessment on education. The roundtables bring experts from designated geographic areas together to share information about topics of mutual interest, such as "The Impact of Evaluation and Assessment on Educational Policy," "The Impact of Examination Systems on Curriculum Development," and "International Comparisons of Student Achievements." Since its inception IAEA has conducted through its members a number of projects for UNESCO and the World Bank.

The executive secretariat of IAEA is located at CITO in the Netherlands.

See also: International Development Agencies and Education, subentry on United Nations and International Agencies.

Ton Luijten


The International Association for the Evaluation of Educational Achievement (IEA), founded in the late 1950s, conducts international comparative studies in which educational achievement is assessed in relation to student background, teacher, classroom, and school variables. At the time of the IEA's founding, there was a growing awareness among international agencies of the role of formal education in social and economic development, yet indicators of educational "productivity" were lacking. A group of researchers, among them Torsten Husén and his colleagues, decided in the early 1960s to undertake a first study of mathematics achievement in twelve countries to explore the feasibility of international comparative achievement studies. This first study marked the birth of the IEA.

After starting as a group of researchers, the IEA soon became a cooperative of research institutes with a primarily academic research focus. Since the early 1980s the IEA has focused increasingly on the interests of policymakers, and an increasing number of member countries are represented by their ministry of education rather than by a leading research institute. The current membership of the IEA comprises almost sixty countries from all regions of the world, and these are represented in a policymaking body, the IEA General Assembly, which is supported by the Secretariat in Amsterdam. Each IEA study is managed by three bodies: the International Coordinating Centre (ICC), which is responsible for the conduct of the research at the international level; the International Steering Committee (ISC), which monitors the quality of the research and is responsible for the general policy directions; and the International Study Committee, which consists of the National Research Coordinators (who are responsible for the study at the national level), the ISC, and the ICC.

Over the years the IEA has conducted many survey studies of basic school subjects. Most of the studies were curriculum driven, measuring educational outcomes (the attained curriculum) on the basis of an analysis of the "official" curricula (the intended curricula) of the participating countries. All these studies evaluate school and classroom process variables (the implemented curriculum), as well as teacher and student background variables. Examples are the studies of mathematics and science, reading literacy, civics education, and English and French as foreign languages. The IEA also conducts studies that are not curriculum based, such as the Pre-Primary Project and the Computers in Education study. In a typical IEA study, data are collected in the third and/or fourth grade, the seventh and/or eighth grade, and the final year of secondary schooling, although some studies do not include all three populations. IEA's best-known study is the Third International Mathematics and Science Study (TIMSS), conducted between 1992 and 1999 for all three populations, in which more than forty countries participated. This study was designed to assess achievement in mathematics and science in the context of national curricula, instructional practices, and the social (and learning) environment of students.

To allow countries a longitudinal international comparative perspective, the IEA in the late 1990s initiated a basic cycle of studies in which the association studies, in alternating years, mathematics and science (through TIMSS, now called the Trends in International Mathematics and Science Study) and reading literacy (through the Progress in International Reading Literacy Study). Additionally, the IEA conducts other studies such as the Civic Education Study, which was completed in 2002, and the Second Information Technology in Education Study, which started in the fall of 1997.

Purposes and Functions of IEA Studies

IEA's mission is to enhance the quality of education. Its studies have two main purposes: (1) to provide policymakers and educational practitioners with information about the quality of their education in relation to relevant reference countries; and (2) to assist in understanding the reasons for observed differences among educational systems.

Given these purposes, the IEA strives for two kinds of comparisons in its studies. The first one consists of straight international comparisons of effects of education in terms of scores (or subscores) on international tests. The second relates to how well a country's official curriculum is implemented in the schools and achieved by students.

As a result, IEA studies have a variety of functions for educational policymakers, practitioners, and researchers:

  • describing the national results in an international context
  • analyzing the information about the status of the achievement of pupils against the results of one or more other countries or against the results in the country of interest in an earlier study ("benchmarking")
  • analyzing data to contribute to recommendations for changes when and where needed ("monitoring")
  • analyzing data with the purpose of understanding the reasons for observed performances either in a national context or within an international comparative perspective
  • promoting a general "enlightenment"; that is, not providing a direct link to decisions but rather a gradual diffusion of ideas into the sphere of organizational decision-making

Many national and international reports on IEA studies illustrate the usefulness of IEA studies for educational policy and practice. For example, in Australia, Hungary, Ireland, Japan, New Zealand, and the United States, specific curriculum changes have been attributed to IEA findings.

Considerations in Planning IEA Studies

IEA studies are very complex endeavors and are conducted with attention to quality at every step of the way. The first question to be answered in planning an IEA study is "What questions do we want to address through this study?" The choice of leading research questions is critical because a variety of competing perspectives must be reconciled in realizing a study. In order to obtain valid and useful data and indicators, high-level scientific and technical standards have to be met for each component of the study, such as the development of a conceptual framework; determining target populations; curriculum analysis; instrument development (including pilot testing, translation, etc.); sampling; data collection, cleaning, and file building; quality control in participating countries of each component; data analysis; and report writing.

Many countries participate in IEA studies (e.g., in TIMSS more than forty countries), and in keeping with its mission the IEA aims to create opportunities for each country to conduct its own cross-national analysis in order to enhance the understanding of the functioning of its educational system at all levels. The varying interests of participating countries create a dilemma between desirability and feasibility (many stakeholders want an array of data, while there are practical limitations in collecting data in schools), and in these types of studies compromises have to be found among the interests of all participating countries. The IEA aims for a design and for instruments that are as fair as possible to all participating countries, while also allowing for national options. For example, in the 1999 TIMSS study in South Africa, where the majority of pupils receive instruction in a language other than the home language, a language proficiency test was included to allow for investigating relationships between language and achievement.

A final point that requires careful attention is the organizational and logistical complexities of the IEA studies. For instance, the 1995 TIMSS study involved the following: achievement testing in mathematics and science in forty-five countries; five grade levels (third, fourth, seventh, eighth, and final year of secondary school); more than half a million students; testing in more than thirty languages; more than 15,000 participating schools; nearly 1,000 open-ended questions, generating millions of student responses and performance assessments; questionnaires from students, teachers, and school principals containing about 1,500 questions; and many thousands of individuals to administer the tests and process the data.


IEA studies do not lead to easy answers to complex educational problems, but they contribute to the body of knowledge of how educational systems work and of optimal conditions for teaching and learning. An example can be found in a 1992 report by T. Neville Postlethwaite and Kenneth N. Ross, who determined on the basis of cross-national analysis of the IEA Reading Literacy Study that a large number of variables (including school, teacher, teaching, and student variables) influenced reading achievement. Their analyses illustrate how IEA studies can contribute to informed decision-making by policymakers and create an awareness of the rich variety of educational settings and approaches around the world. On the other hand, the types of studies conducted by the IEA have some limitations and also receive criticism. Technical criticisms have largely been addressed in recent studies, and critiques have become increasingly political.

Finally, reflecting on nearly half a century of IEA activities, a number of developments have occurred and benefits have emerged. IEA studies have moved beyond simply comparing international assessment scores and contextual information. They have contributed to the national and international education community in various ways, providing possibilities for addressing regional issues as part of an international comparative study, for linking national assessments to international assessments, and, for developing countries in particular, for collecting baseline data on education. Important benefits of the international comparative IEA studies have been the development of education research capacity in many countries and the emergence of a network of researchers and specialists who can be drawn upon by governments and other agencies, both nationally and internationally.

See also: International Assessments, subentries on IEA and OECD Studies of Reading Literacy, IEA Study of Technology in the Classroom, IEA Third International Mathematics and Science Study, Political Democracy and the IEA Study of Civic Education.


Bibliography

Beaton, Albert E., et al. 1996a. Mathematics Achievement in the Middle School Years. Chestnut Hill, MA: Boston College, TIMSS International Study Center.

Beaton, Albert E., et al. 1996b. Science Achievement in the Middle School Years. Chestnut Hill, MA: Boston College, TIMSS International Study Center.

Beaton, Albert E., et al. 2000. The Benefits and Limitations of International Educational Achievement Studies. Paris: International Institute for Educational Planning/International Academy of Education.

Husén, Torsten. 1967. International Study of Achievement in Mathematics: A Comparison of Twelve Countries, Vols. 1–2. Stockholm, Sweden: Almqvist and Wiksell; New York: Wiley.

Husén, Torsten, and Postlethwaite, T. Neville. 1996. "A Brief History of the International Association for the Evaluation of Educational Achievement (IEA)." Assessment in Education 3:129–141.

International Association for the Evaluation of Educational Achievement. 1998. IEA Guidebook, 1998: Activities, Institutions, and People. Amsterdam: IEA Secretariat.

Keeves, John P. 1995. The World of School Learning: Selected Key Findings from Thirty-Five Years of IEA Research. Amsterdam: IEA Secretariat.

Kellaghan, Thomas. 1996. "IEA Studies and Educational Policy." Assessment in Education 3:143–160.

Loxley, Wendy. 1992. "Introduction to Special Volume." Prospects 22:275–277.

Martin, Michael O.; Rust, Keith; and Adams, Raymond, eds. 1999. Technical Standards for IEA Studies. Amsterdam: IEA Secretariat.

Martin, Michael O., et al. 2000. TIMSS, 1999: International Science Report. Chestnut Hill, MA: Boston College, Lynch School of Education, IEA TIMSS International Study Center.

Mullis, Ina V. S., et al. 2000. TIMSS, 1999: International Mathematics Report. Chestnut Hill, MA: Boston College, Lynch School of Education, IEA TIMSS International Study Center.

Pelgrum, Willem J., and Anderson, Ronald E., eds. 1999. ICT and the Emerging Paradigm for Lifelong Learning. Amsterdam: IEA Secretariat.

Postlethwaite, T. Neville, and Ross, Kenneth N. 1992. Effective Schools in Reading: Implications for Planners. Amsterdam: IEA Secretariat.

Shorrocks-Taylor, Diane, and Jenkins, Edgar W. 2000. Learning from Others. Dordrecht, Netherlands: Kluwer.

Torney-Purta, Judith; Lehmann, Rainer; Oswald, Hans; and Schulz, Wolfram. 2001. Citizenship and Education in Twenty-Eight Countries. Amsterdam: IEA Secretariat.

Internet Resource

International Association for the Evaluation of Educational Achievement. 2002. <>.

Tjeerd Plomp


Globalization, increased worker mobility, and competition between knowledge-based economies have led to a growth in demand for studies to measure and compare the achievement outcomes of education systems. Given its importance in students' educational development and in everyday life, it is not surprising that reading has featured in a number of these studies. Reflecting a concern with functional aspects of learning, the assessment of reading was expanded from one that mainly focused on decoding and comprehension skills to one that addressed the ability to understand and use written language forms required by society and valued by the individual.

The International Association for the Evaluation of Educational Achievement (IEA), a nongovernmental organization, pioneered international assessment studies in the early 1960s. The number of education systems (mostly in the industrialized world) participating in its reading studies increased from fifteen in 1970–1971 to thirty-two in 1991–1992 and to thirty-six in 2001. In the studies, assessment instruments were developed by international panels, translated into national languages, and administered to representative samples of nine- or ten-year-old students and thirteen- or fourteen-year-old students in participating countries. A variety of correlates of reading proficiency, including students' opportunity to learn and resources for reading, were identified.

The Organisation for Economic Co-operation and Development (OECD), responding to concern among member governments about the preparedness of young people to enter society and the world of work, has supported the development of the Programme for International Student Assessment (PISA), which is designed to monitor the achievements of fifteen-year-old students in reading literacy (as well as in mathematics and science). Thirty-two countries participated in the first survey in 2000. The assessment measured students' ability to comprehend continuous prose as well as noncontinuous texts such as lists and forms, and to retrieve and evaluate information.

In the 1990s a number of countries (eventually twenty) joined with Statistics Canada in studies, later involving OECD, to assess the ability of adults (sixteen to sixty-five years old) to understand and employ written information in daily activities at home, at work, and in the community. Reading tasks were based on text from newspapers and brochures, maps, timetables, and charts; basic arithmetic tasks were also included. Proficiency was found to be negatively related to age and, in eighteen countries, respondents' level of education was its strongest predictor.

Although international studies were initially planned to improve understanding of the educational process and to provide information relevant to policymaking and educational planning, the media have generally interpreted their findings in a competitive context, focusing on countries' relative performances, without considering the social, economic, and educational conditions that affect student learning.

See also: International Assessments, subentry on International Association for the Evaluation of Educational Achievement.


Elley, Warwick B. 1992. How in the World Do Students Read? The Hague, Netherlands: International Association for the Evaluation of Educational Achievement.

International Adult Literacy Survey. 2000. Literacy in the Information Age: Final Report of the International Adult Literacy Survey. Paris: Organisation for Economic Co-operation and Development; Ottawa, ON: Statistics Canada.

Vincent Greaney

Thomas Kellaghan


IEA Study of Technology in the Classroom

Prior to 1980 few teachers used information technology (IT) in the classroom. But the global diffusion of personal computers in the 1980s generated considerable interest in educational circles around the world, leading the International Association for the Evaluation of Educational Achievement (IEA) to initiate the first international comparative study of IT, or computers, in education. This study was named the Computers in Education Study and was sometimes called CompEd.

IEA Computers in Education Study

Twenty-two countries participated in the first stage of the Computers in Education Study and in 1989 conducted school surveys, as documented by Pelgrum and Plomp. Surveys were conducted in elementary, lower secondary, and upper secondary schools, and within each school sample, questionnaires were completed by the principal, the computer coordinator, and several teachers. In 1992 the second stage of the study repeated the surveys of the first stage and added a student assessment, as reported by Willem Pelgrum and colleagues and by Ronald E. Anderson.

An assessment was designed to measure students' general ability to understand and use information technology. Student performance on this assessment depended largely on the extent to which school curricula in each country provided the opportunity to learn such skills. A number of countries had already instituted an informatics curriculum at the middle- or upper-secondary levels. Perhaps the most important finding of the study was that teachers in general lacked opportunities for the type of training that would enable them to integrate technology into their instruction.

The terminology for information technology has changed since the 1980s. Whereas information technology was called computers or IT during that decade, by the late 1990s educators in most countries referred to it as ICT to stand for the phrase information and communication technology. However, in some countries, most notably the United States, educators refer to information technology simply by the word technology.

Second IEA Study

The rapid diffusion of the Internet and multimedia technology during the mid-1990s generated interest in a new study that could, among other things, investigate changes in curricula and classrooms since IEA's earlier study. The Second International Technology in Education Study (SITES) was initiated in 1996 by the IEA, and school surveys were conducted in 1998. The SITES study consists of three modules, as summarized in Table 1.

Although the study was approved by the IEA in 1996, the survey data of module 1 were collected in 1998. The module 2 case study visits to the school sites were conducted during 2000 and 2001, and the reports will be released in 2002 and 2003. Module 3 was launched in 2001, but the data for the surveys and student assessments will be collected during 2004, with the results released in 2005 and 2006. Each of the three modules will be described briefly in turn.

School Survey Module. In 1998 data were collected using a questionnaire survey of principals and one of technology coordinators or their equivalents. Twenty-six countries participated by conducting these surveys in one or more of these three school levels: primary, lower secondary, and upper secondary. As reported by Pelgrum and Anderson, this module produced findings on the following phenomena:

  • the extent to which ICT is used (and by whom) in education systems across the globe
  • the extent to which education systems have adopted, implemented, and realized objectives that are considered important for education in a knowledge society
  • teaching practices that principals consider to be innovative, important, effective, and satisfying
  • existing differences in ICT-related practices both within and between education systems and what lessons can be learned from this.

The findings on school Internet access were representative of the heterogeneous pattern of cross-national adoption of new ICT practices. Figure 1 shows that while 100 percent of the schools in Singapore and Iceland had access, some countries had only about a fourth of their schools connected. Most of the other countries had connected more than 50 percent of their schools. What is so remarkable about this pattern is that even in populations that do not speak English (the dominant language of the Internet), most of the countries' schools had been connected and many of the students were using the Internet in school. This rapid connection of schools to the Internet occurred within about five years.


Case Studies Module. Nearly thirty countries conducted in-depth case studies during the last half of 2000 and the first half of 2001. The focus of this qualitative research is innovative pedagogical practices that use technology (IPPUT). The main purposes are to understand what sustains these practices and what outcomes they produce. To accomplish this investigation, each case study describes and analyzes classroom-based processes and their contexts. These case studies are intended to provide policy analysts and teachers with examples of "model" classroom practices and offer policymakers findings regarding the contextual factors that are critical to successful implementation and sustainability of these exemplary teaching practices using ICT.

The twenty-eight countries participating in this module of SITES were Australia, Canada, Chile, Chinese Taipei, Czech Republic, Denmark, England, Finland, France, Germany, Hong Kong (SAR), Israel, Italy, Japan, Korea, Latvia, Lithuania, the Netherlands, Norway, Philippines, Portugal, Russian Federation, Singapore, Slovakia, South Africa, Spain (Catalonia), Thailand, and the United States. As each country will conduct four to twelve case studies, the total number of cases for analysis is expected to be more than 150.

One noteworthy preliminary finding was that the students used the Internet as part of nearly every innovative practice selected. Another preliminary finding of perhaps greater importance is that the students involved in these innovative pedagogical practices often engaged in activities that could be considered "knowledge management," in that they frequently constructed knowledge products. Typically such activities were called projects and included the tasks of searching, organizing, and evaluating knowledge. For instance, Germany's first case study found that students "turned into providers of knowledge." Portugal's pilot case reported that the teachers wanted their students to be "constructors rather than receptors of mathematical knowledge." In Norway and the United States the case studies found students working collaboratively with ICT tools to complete projects yielding diverse types of knowledge.

Assessment Module. This module builds upon these findings from the leading-edge classrooms of the case studies. Specifically, the school survey, teacher survey, and student assessment will include indicators to determine the difference between the innovative and the typical learning contexts. The study will measure the ICT-supported knowledge management competencies of students, including their abilities to retrieve, organize, critically evaluate, communicate, and produce knowledge. In addition, the study will determine the readiness of schools and teachers to provide a learning environment where students can develop these abilities. Finally, this module will follow up the school survey module by administering a school survey to principals and ICT coordinators to measure trends in technology availability and use in schools.

All countries participating in the assessment module will study fourteen-year-old students; the target population will be the grade with the most fourteen-year-old students. An optional population will be the grade with the most ten-year-old students. Each country will be expected to attain a sample of at least 200 randomly selected schools per population. In at least 100 of these participating schools, one intact class will be sampled from all classes in the target grade. In addition to the teacher of the sampled class, three additional teachers of the target grade will be sampled and surveyed.

Guiding the development of the student assessment is a framework that considers different types of knowledge management and types of tools. The categories of knowledge and tools are shown in Table 2 with the cells that illustrate sample performance tasks.

In the student assessment there will be a paper-and-pencil assessment administered to all students in the sample, and an optional Internet-based performance assessment given to four students in each class, provided that they are Internet-"literate." Students in the sampled class will be administered a survey questionnaire and a short paper-and-pencil assessment during a single class period. The assessment will include a short Internet screening test. Students will not be eligible for participation in the performance assessment unless they pass the screening test. If at least half of the students pass the Internet screening test, then the school will be eligible for participation in the performance assessment.
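The two-step eligibility rule just described reduces to a simple decision function. The sketch below is illustrative only (the function and variable names are invented, not SITES terminology), assuming a list of pass/fail screening results for one sampled class:

```python
def screening_eligibility(pass_flags):
    """Illustrative sketch of the rule described above: only students who
    pass the Internet screening test may take the performance assessment,
    and the school qualifies only if at least half of the class passes."""
    passing = [i for i, passed in enumerate(pass_flags) if passed]
    school_eligible = 2 * len(passing) >= len(pass_flags)
    return school_eligible, passing

# a sampled class of six students, four of whom pass the screening test
eligible, passing = screening_eligibility([True, True, False, True, False, True])
# eligible is True; only the four passing students may take the assessment
```

Because the class-level rule depends on the same screening results as the student-level rule, a single pass over the class answers both questions.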

Despite highly diverse national educational systems around the world, almost every country has established policies regarding ICT in education. SITES in its first two modules found many different approaches across countries to the ICT challenge in education. Yet there are common threads such as widespread and rapidly growing access to the Internet. There is every reason to believe that this trend, as well as the large digital divide across countries, will continue in the early twenty-first century. It is anticipated that the assessment module with its focus on knowledge management will capture significant trends in information technology and the changing role of knowledge in society.

See also: International Assessments, subentry on International Association for the Evaluation of Educational Achievement.


Anderson, Ronald E., ed. 1993. "Computers in American Schools, 1992: An Overview." IEA Computers in Education Study. Minneapolis: University of Minnesota, Department of Sociology.

Anderson, Ronald E. 2001. "Youth and Information Technology." In The Future of Adolescent Experience: Societal Trends and the Transition to Adulthood, ed. Jeylan T. Mortimer and Reed Larson. New York: Cambridge University Press.

Pelgrum, Willem J., and Anderson, Ronald E., eds. 1999. ICT and the Emerging Paradigm for Life Long Learning: A Worldwide Educational Assessment of Infrastructure, Goals, and Practices. Amsterdam: International Association for the Evaluation of Educational Achievement.

Pelgrum, Willem J.; Janssen Reinen, I. A. M.; and Plomp, Tjeerd. 1993. Schools, Teachers, Students and Computers: A Cross-National Perspective. The Hague, Netherlands: International Association for the Evaluation of Educational Achievement (IEA).

Pelgrum, Willem J., and Plomp, Tjeerd. 1991. The Use of Computers in Education Worldwide. Oxford: Pergamon.

Plomp, Tjeerd; Anderson, Ronald E.; and Kontogiannopoulou-Polydorides, Georgia. 1996. Cross National Policies and Practices on Computers in Education. Dordrecht, Netherlands: Kluwer.


Ronald E. Anderson


IEA Third International Mathematics and Science Study

The Third International Mathematics and Science Study (TIMSS) is the largest and most ambitious educational assessment ever done under the auspices of the International Association for the Evaluation of Educational Achievement (IEA). The TIMSS data collection in 1995 involved testing more than a half-million students in more than forty educational systems (usually countries) around the world. Students were assessed at five different grade levels, and students, as well as their teachers and principals, were given questionnaires about their backgrounds, attitudes, and practices. The TIMSS data collection in 1999 focused on eighth-grade students in thirty-eight countries.

The TIMSS study resulted in many provocative findings, which are published in the TIMSS international reports. The results showed that the averages of students of participating Asian nations (Hong Kong, Japan, Korea, and Singapore) were higher than those of students of other nations in mathematics at both the elementary and middle school levels. Japan and Korea also did very well in science at these levels, although Australia, Austria, and the United States also performed well at the elementary level, and the Czech Republic performed well at the middle school level.

Testing was also done at the end of secondary school, where a sample of the total population of students was assessed in mathematical and scientific literacy. In addition, samples of students taking advanced mathematics or physics courses were tested on those subjects. The Asian countries were not among the twenty countries that participated in this assessment. The average scores of the Netherlands, Sweden, Iceland, Norway, and Switzerland were highest in mathematical and scientific literacy; the average achievement in advanced mathematics was highest in France and Russia; and the average physics test scores were highest in Norway, Sweden, and Russia.

The TIMSS results have been widely reported in the press and in numerous public reports. All TIMSS reports, including those cited above, are available on the Internet. TIMSS's website also contains technical reports that contain the details of the TIMSS methodology, and the raw TIMSS data are available at this site for those who would like to use the data to investigate different educational questions or research methodologies.

The study was administered by the International Study Center at Boston College and by the International Coordinating Centre at the University of British Columbia. TIMSS has been funded by the participating countries along with major contributions from the Government of Canada, the National Science Foundation (U.S.), and the National Center for Education Statistics (U.S.).

The Aim of TIMSS

TIMSS was established to improve the teaching and learning of mathematics and science in school systems around the world through a comparison of the curricula and practices of different countries, and to relate this information to the performance of their students. The research questions included not only what students in the participating countries had learned, but also how the curricula varied in different countries and what facilities and opportunities were made available for students to learn what was in their curricula. The relationships of students' performance to their curricula, educational opportunities, and backgrounds were also to be investigated.

IEA Studies of Mathematics and Science

The IEA has been involved in comparing the educational systems of various countries for many years. Four previous IEA studies of mathematics or science led up to TIMSS:

  • First International Mathematics Study (FIMS), 1959–1960.
  • First International Science Study (FISS), 1970.
  • Second International Mathematics Study (SIMS), 1980–1982.
  • Second International Science Study (SISS), 1982–1986.

TIMSS, which included both mathematics and science, was conducted in the Northern Hemisphere in 1995 and in the Southern Hemisphere in both 1994 and 1995. A second round of TIMSS, involving only eighth-grade students, was conducted in 1999. A third assessment is planned for 2003.

TIMSS Design

The design of TIMSS grew out of discussions in the late 1980s by many researchers who were involved in SIMS, and these discussions led to the Study of Mathematics and Science Opportunity (SMSO), which explored the curricula and teaching practices of a few countries around the world and initiated the development of the tests and questionnaires for TIMSS. The final design and its instrumentation were developed and approved by the participating countries in conjunction with mathematics and science education specialists and specialists in educational assessment.

Populations and Sampling

TIMSS defined three populations of students for assessment:

  • Population 1all students enrolled in the two adjacent grades that contain the largest proportion of students nine years old at the time of testing. In most participating countries, grades three and four fit this definition.
  • Population 2all students enrolled in the two adjacent grades that contain the largest proportion of students thirteen years old at the time of testing. In most participating countries, grades seven and eight fit this definition.
  • Population 3all students in their final year of secondary education, including students in vocational education programs. In most countries, this was grade twelve. Population 3 included two subpopulations: students taking advanced courses in mathematics and students taking advanced courses in physics.

In Populations 1 and 2, an early decision was made to sample intact mathematics classrooms so that information about teachers and students could be matched and studied. The students who were not enrolled in any mathematics class were treated as a separate classroom, and thus they could be selected for the sample. In Population 3, students were not sampled by classroom, but were classified according to their mathematics and science courses, and then individually sampled for assessment. All participating countries were required to assess Population 2, but assessments of Populations 1 or 3 were optional.
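The intact-classroom design described above amounts to cluster sampling in which the students enrolled in no mathematics class form one extra sampling unit. A minimal sketch with invented names and data (this is not the actual TIMSS sampling software):

```python
import random

def sample_units(math_classes, unenrolled_students, n_units, seed=42):
    """Sketch of the Population 1/2 design described above: intact
    mathematics classes are the sampling units, and students enrolled in
    no mathematics class are pooled into one pseudo-classroom so that
    they, too, can be selected. All names here are illustrative."""
    units = list(math_classes)
    if unenrolled_students:
        units.append(list(unenrolled_students))  # the pseudo-classroom
    rng = random.Random(seed)                    # fixed seed for the demo
    return rng.sample(units, n_units)

classes = [["Ann", "Ben"], ["Cal", "Dee"], ["Eva", "Flo"]]
others = ["Gus"]                # enrolled in no mathematics class
chosen = sample_units(classes, others, n_units=2)
```

Because the unenrolled students form a unit of their own, every student in the grade retains a nonzero probability of selection.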

TIMSS Tests and Questionnaires

The TIMSS tests were constructed using mathematics and science frameworks that were agreed upon by the participating countries and subject-matter specialists. TIMSS included multiple-choice, short-answer, extended-response, and performance items. Countries were not required to administer the performance items.

In order to widen the curriculum coverage of TIMSS, a form of matrix sampling was used in which students in a population received different test items, except for a few items that were common to all booklets. In Populations 1 and 2, there were eight different booklets of items administered along with student questionnaires. Teacher and school questionnaires were also administered. In Population 3, nine test booklets were administered, with the particular booklet to be used dependent on the courses in which a student was enrolled. The teacher questionnaires were omitted because intact classrooms were not sampled. The booklets in Populations 1 and 2 required about an hour of student time, whereas the booklets for Population 3 required about ninety minutes.
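The matrix-sampling idea can be illustrated with a small sketch. The core size, item pool, booklet count, and block-pairing rule below are invented for illustration; the actual TIMSS booklet design was considerably more elaborate:

```python
def build_booklets(core_items, unique_items, n_booklets=8):
    """Illustrative matrix-sampling sketch: every booklet carries the
    common core items, while the remaining items are split into blocks
    and rotated so that together the booklets cover the whole pool."""
    block_size = len(unique_items) // n_booklets
    blocks = [unique_items[i * block_size:(i + 1) * block_size]
              for i in range(n_booklets)]
    booklets = []
    for b in range(n_booklets):
        # each booklet pairs the core with two adjacent rotating blocks
        booklets.append(core_items + blocks[b] + blocks[(b + 1) % n_booklets])
    return booklets

core = ["C1", "C2"]                   # items common to all booklets
pool = [f"I{k}" for k in range(16)]   # items spread across booklets
booklets = build_booklets(core, pool)
```

No student sees the full pool, yet the shared core and overlapping blocks link the booklets so that results can be placed on a common scale.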

Administration and Quality Monitoring

TIMSS was administered by personnel from the participating countries, and the TIMSS administrators were given extensive training to assure the high quality and comparability of the TIMSS data. International quality-control monitors were hired and trained to visit the national research centers and to review their procedures. The translations were also checked centrally to detect and avoid differences in the presentation of assessment questions.

Analysis and Reporting

The basic TIMSS database was constructed in the participating countries and then given extensive statistical scrutiny at the IEA Data Processing Center in Germany. Any unusual occurrences in the database were noted and adjudicated with the participating countries. The data were then sent to Statistics Canada for a review of the sampling and construction of sampling weights. The data also went to the Australian Council for Educational Research for scale development. The scaling was done using a variation of the Rasch model. The scaled database then went to the International Study Center at Boston College for analysis and reporting.
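The Rasch model behind the scaling relates the probability of a correct response to the difference between a student's ability and an item's difficulty. The sketch below, with invented difficulties and responses, estimates one student's ability by Newton-Raphson under the assumption that item difficulties are already known; the actual TIMSS scaling used a more elaborate variant of the model:

```python
import math

def rasch_prob(theta, b):
    """Rasch (1PL) model: probability of answering an item of
    difficulty b correctly, given ability theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def estimate_theta(responses, difficulties, iters=25):
    """Newton-Raphson maximum-likelihood ability estimate for one
    student, assuming known item difficulties (illustrative only)."""
    theta = 0.0
    for _ in range(iters):
        probs = [rasch_prob(theta, b) for b in difficulties]
        gradient = sum(x - p for x, p in zip(responses, probs))
        hessian = -sum(p * (1.0 - p) for p in probs)
        theta -= gradient / hessian
    return theta

difficulties = [-1.0, -0.5, 0.0, 0.5, 1.0]  # invented item difficulties
responses = [1, 1, 1, 0, 0]                 # one student's scored answers
theta = estimate_theta(responses, difficulties)
# theta lands where the expected score equals the observed score of 3
```

The estimate converges to the ability at which the model's expected number-correct score matches the student's observed score.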

TIMSS was a very complex study that required the cooperation of the many countries and contracting organizations involved. Cooperation was critical to assure that the tests were appropriate for all countries and that the administration of TIMSS was uniform. The coordination required many meetings in which strategy and tactics were discussed and decided. Many training sessions were required to assure that all phases of TIMSS were carried out successfully. The flow of data from the participating countries to Germany, then Canada, Australia, and finally the United States required careful monitoring. Finally, the reports were designed and approved by the participating countries before the data were available. The details of the procedures are given in the TIMSS technical reports.

See also: International Assessments, subentry on International Association for the Evaluation of Educational Achievement.


Beaton, Albert E.; Martin, Michael O.; Mullis, Ina V. S.; Gonzalez, Eugenio J.; Smith, Teresa A.; and Kelly, Dana L. 1996. Mathematics Achievement in the Middle School Years: IEA's Third International Mathematics and Science Study (TIMSS). Chestnut Hill, MA: Boston College.

Beaton, Albert E.; Martin, Michael O.; Mullis, Ina V. S.; Gonzalez, Eugenio J.; Smith, Teresa A.; and Kelly, Dana L. 1996. Science Achievement in the Middle School Years: IEA's Third International Mathematics and Science Study (TIMSS). Chestnut Hill, MA: Boston College.

Harmon, Maryellen E.; Smith, Teresa A.; Martin, Michael O.; Kelly, Dana L.; Beaton, Albert E.; Mullis, Ina V. S.; Gonzalez, Eugenio J.; and Orpwood, Graham. 1997. Performance Assessment in IEA's Third International Mathematics and Science Study. Chestnut Hill, MA: Boston College.

Martin, Michael O.; Mullis, Ina V. S.; Beaton, Albert E.; Gonzalez, Eugenio J.; Smith, Teresa A.; and Kelly, Dana L. 1997. Science Achievement in the Primary School Years: IEA's Third International Mathematics and Science Study (TIMSS). Chestnut Hill, MA: Boston College.

Martin, Michael O.; Mullis, Ina V. S.; Gonzalez, Eugenio J.; Gregory, Kelvin D.; Smith, Teresa A.; Chrostowski, Steven J.; Garden, Robert A.; and O'Connor, Kathleen M. 2000. TIMSS 1999: International Science Report. Chestnut Hill, MA: International Study Center, Lynch School of Education, Boston College.

Mullis, Ina V. S.; Martin, Michael O.; Beaton, Albert E.; Gonzalez, Eugenio J.; Kelly, Dana L.; and Smith, Teresa A. 1997. Mathematics Achievement in the Primary School Years: IEA's Third International Mathematics and Science Study (TIMSS). Chestnut Hill, MA: Boston College.

Mullis, Ina V. S.; Martin, Michael O.; Beaton, Albert E.; Gonzalez, Eugenio J.; Kelly, Dana L.; and Smith, Teresa A. 1998. Mathematics and Science Achievement in the Final Year of Secondary School: IEA's Third International Mathematics and Science Study (TIMSS). Chestnut Hill, MA: Boston College.

Robitaille, David F., and Garden, Robert A., eds. 1996. TIMSS Monograph No. 2: Research Questions and Study Design. Vancouver, BC, Canada: Pacific Educational Press.

Schmidt, William H.; McKnight, Curtis C.; Valverde, Gilbert A.; Houang, Richard T.; and Wiley, David E. 1997. Many Visions, Many Aims, Volume 1: A Cross-National Investigation of Curricular Intentions in School Mathematics. Dordrecht, Netherlands: Kluwer.

Internet Resources

International Association for the Evaluation of Educational Achievement. 2002. <>.

Third International Mathematics and Science Study. 2002. <>.

Albert E. Beaton


Political Democracy and the IEA Study of Civic Education

In examining the contributions of education to political democracy, researchers have considered shared decision-making, use of extracurricular activities to promote civic awareness, and policies designed to enhance educational equity. School curricula (especially in history, civics and government, and the social sciences/social studies) and the atmosphere of classroom discussion are also dimensions of education that contribute to students' acquisition of an understanding of, and willingness to participate in, political democracy. Drawing on empirical findings from a massive international study of civic education, this entry examines evidence about these dimensions of education. The special focus is on how classroom practices contribute to what fourteen-year-old students know and believe about democratic processes and institutions.

The 1999 IEA Civic Education Study

The International Association for the Evaluation of Educational Achievement (IEA), headquartered in Amsterdam, is a consortium of research institutes and agencies in more than fifty countries. Since the late 1950s IEA has carried out nearly twenty large, cross-national studies of educational achievement in various curriculum areas. The 1999 Civic Education Study, the first IEA study in this subject area since 1971, was ambitious both in concept and in scope. About 90,000 fourteen-year-old students from twenty-eight countries as well as approximately 10,000 teachers and thousands of school principals participated in the study.

The countries participating in the test and survey of fourteen-year-olds in 1999 included Australia, Belgium (French-speaking), Bulgaria, Chile, Colombia, Cyprus, the Czech Republic, Denmark, England, Estonia, Finland, Germany, Greece, Hong Kong (SAR), Hungary, Italy, Latvia, Lithuania, Norway, Poland, Portugal, Romania, the Russian Federation, the Slovak Republic, Slovenia, Sweden, Switzerland, and the United States. Fifteen of these countries and Israel surveyed an older population of students, primarily in 2000.

Design of the IEA Civic Education Study

Through an international consensus process involving representatives from the participating countries and reflecting observations from structured national case studies conducted during the first phase of this study, three domains were identified as important topics in civic education across democracies: Democracy, Institutions, and Citizenship; National Identity and International Relations; and Social Cohesion and Diversity. Test and survey items were then written to assess students' knowledge and skills as well as attitudes in these three domains. Specifically, students were tested on their knowledge of democratic processes and institutions and their skills at interpreting political communication (e.g., interpreting the message of a political cartoon and an election leaflet). In addition, students were surveyed on their concepts of democracy and citizenship; their attitudes toward their countries, political institutions, and the political rights of women and immigrants; and their expected civic participation. Background information was also collected from the students, including the activities in which they participated both in and out of school, the books available to them at home, and their perceptions of classroom climate.

The test and survey were administered to fourteen-year-olds by national research teams in accordance with IEA technical policies and guidelines. Teachers and school principals were also surveyed. The data provide a rich and complex picture of the civic development of young adolescents and the views of their teachers.

The Importance of Classroom Climate

The extent to which students experience their classrooms as places to discuss issues and express their opinions as well as hear the opinions of their peers has been identified as a vital element of civic education. Because of its importance, a scale was developed to measure students' perceptions of the classroom climate for open discussion in the 1999 IEA Civic Education Study. Students were asked how frequently (never, rarely, sometimes, or often) they were encouraged to make up their own minds about issues, how often they felt free to disagree with their teachers about political and social issues during class, and the extent to which teachers respected student opinions and encouraged their expression during class. Students were also asked how often teachers presented several sides of an issue and whether the students felt free to express opinions even when the issues might be controversial.

The students' responses to these statements proved to be a significant predictor of both student knowledge and attitudes. For example, single-level path analyses show that a democratic classroom climate, where discussion takes place and teachers encourage multiple points of view, was an important predictor of students' knowledge of democratic processes and institutions and of their skills in interpreting political communication. The only factors more closely related to knowledge were the home literacy resources available to the students and their plans for future education. Classroom climate was also positively associated with students' plans to vote as adults, an essential element of democracy. Furthermore, a positive classroom climate was related to students' trust in government institutions, their confidence in school participation, and positive attitudes toward immigrants and women. In short, findings from students tested in 1999 in the IEA Civic Education Study show that when schools model democratic values by providing an open climate for discussing issues, they enhance their effectiveness in promoting students' civic knowledge and engagement.

Although an open classroom climate seems to enhance democratic learning and engagement, this classroom approach is not the norm in many countries. Across the twenty-eight countries in the IEA study, about one-third of the students reported that they were often encouraged to voice their opinions in the classroom, but an almost equal proportion said that this rarely or never occurred (especially when the issues were potentially controversial). Teacher responses confirmed the students' perceptions: teachers reported that teacher-centered methods of instruction, such as the use of textbooks, recitation, and worksheets, were dominant in civic-related classrooms in most of the countries, although there were also opportunities for classroom discussion of issues.


Classrooms where students feel free to express their views on issues, and where multiple perspectives can be heard, seem to foster both knowledge about democratic principles and processes and positive attitudes toward civic engagement and the rights of others. Yet these classroom practices are not the norm in some democratic countries. An emphasis on the transmission of factual knowledge through textbooks and worksheets seems to dominate in many (though certainly not all) classrooms. Research closely tied to the design of professional development programs could help to illuminate the ways in which classrooms might better reflect democratic practices and thereby enhance civic learning and engagement.

See also: International Assessments, subentry on International Association for the Evaluation of Educational Achievement; Social Capital and Education; Social Cohesion and Education.


Elmore, Richard F. 1990. Restructuring Schools: The Next Generation of Educational Reform. San Francisco: Jossey-Bass.

Hahn, Carole L. 1998. Becoming Political: Comparative Perspectives on Citizenship Education. Albany: State University of New York Press.

Torney, Judith; Oppenheim, Abraham N.; and Farnen, Russell F. 1975. Civic Education in Ten Countries: An Empirical Study. New York: John Wiley and Sons.

Torney-Purta, Judith. 2001. "Civic Knowledge, Beliefs about Democratic Institutions, and Civic Engagement among 14-Year-Olds." Prospects 31 (3):279–292.

Torney-Purta, Judith, and Schwille, John. 2002. New Paradigms and Recurring Paradoxes in Education for Citizenship. Oxford: Elsevier Science.

Torney-Purta, Judith; Lehmann, Rainer; Oswald, Hans; and Schulz, Wolfram. 2001. Citizenship and Education in Twenty-Eight Countries: Civic Knowledge and Engagement at Age Fourteen. Amsterdam: IEA.

Torney-Purta, Judith; Schwille, John; and Amadeo, Jo-ann. 1999. Civic Education across Countries: Twenty-Four National Case Studies from the IEA Civic Education Project. Amsterdam: IEA.

Verba, Sidney; Schlozman, Kay Lehman; and Brady, Henry E. 1995. Voice and Equality: Civic Voluntarism in American Politics. Cambridge, MA and London: Harvard University Press.

Judith Torney-Purta

Jo-Ann Amadeo

John Schwille
