RISE Working Paper 18/025 - Testing
Examines a dataset of over 2 million students in 59 countries observed in the international PISA student achievement tests from 2000 to 2015
Abstract
School systems regularly use student assessments for accountability purposes. But, as the authors' conceptual model highlights, different configurations of assessment usage generate performance-conducive incentives of different strengths for different stakeholders in different school environments.
The authors built a dataset of over 2 million students in 59 countries observed over six waves of the international PISA student achievement test between 2000 and 2015. Their empirical model exploits the country panel dimension to investigate reforms in assessment systems over time, with identification coming from taking out country and year fixed effects along with a rich set of student, school, and country measures. They find that the expansion of standardized external comparisons, both school-based and student-based, is associated with improvements in student achievement. The effect of school-based comparison is stronger in countries with initially low performance. Similarly, standardized monitoring without external comparison has a positive effect in initially poorly performing countries. By contrast, the introduction of solely internal testing and internal teacher monitoring, including inspectorates, does not affect student achievement. Their findings point out the pitfalls of overly broad generalizations from specific country testing systems.
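The identification strategy described above corresponds to a standard two-way fixed-effects panel specification. A minimal sketch of that form, where the notation, the interaction term, and the exact set of controls are assumptions for illustration rather than the authors' own specification:

\[
a_{isct} = \beta_1 T_{ct} + \beta_2 \left( T_{ct} \times L_{c} \right) + \gamma' X_{isct} + \mu_c + \lambda_t + \varepsilon_{isct}
\]

Here \(a_{isct}\) is the achievement of student \(i\) in school \(s\), country \(c\), and PISA wave \(t\); \(T_{ct}\) captures the country's use of a given assessment category at time \(t\); \(L_c\) indicates initially low-performing countries; \(X_{isct}\) collects the student, school, and country controls; and \(\mu_c\) and \(\lambda_t\) are the country and year fixed effects that absorb time-invariant country differences and common wave shocks.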
This work is part of the Department for International Development’s ‘Research on Improving Systems of Education’ (RISE) Programme.
Citation
Bergbauer, A., Hanushek, E. and Woessmann, L. (2018). Testing. RISE Working Paper 18/025.