Centre de recherche interuniversitaire sur la formation et la profession enseignante (CRIFPE)


Publication

Salgado, V. (2019). Can a Test Measure Teaching Quality? Validity of Mexico’s Teacher Entry Examination After the 2013 Education Reform. Unpublished doctoral dissertation, Columbia University, New York, USA.

Abstract

In 2013, Mexico introduced a historic reform amending the entry, performance assessment, promotion, incentive programs, and retention of teachers, with the aim of advancing teachers’ careers and eliminating discretionary practices by the teachers’ union. This study analyzed Mexico’s teacher selection process following the reform, focusing on the state of Puebla. It offers evidence on whether standards-based teacher evaluations, specifically the written teacher entry examinations, were a valid method for selecting competent teachers. The core component was a predictive validity study of the teacher selection method, assessing whether the teacher entry examination results predicted teacher performance evaluation results after two years. This was supplemented with semistructured interviews of 31 teachers and an analysis of administrative documents, which contextualized the quantitative findings and offered evidence on the content of the teacher entry examination.
From the current perspective on validity, this study provides evidence on the relationship between teacher entry examination scores and external measures collected at a later point in teachers’ careers, used as criterion validity for interpreting the soundness of the teacher entry examination. The evidence showed that the entry examination was able to predict teacher performance, with correlation coefficients ranging from .23 to .28 between the subject-matter test and the global performance evaluation score (the other two tests were uncorrelated or inadequately correlated). However, this finding must be interpreted carefully, since the convergent evidence between the subject-matter test and the exam instrument of evaluation is possibly due to the similarity in content and method of the two measures. In this regard, the lesson plan instrument offered better evidence of an adequate correlation (.22 to .29) with the teacher entry examination (the portfolio instrument of evaluation showed no significant correlation).
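The criterion-related validity evidence described above rests on a Pearson correlation between two score series for the same teachers. A minimal sketch with synthetic data (the variable names and values are illustrative, not the study’s actual data) shows the computation:

```python
import numpy as np

# Hypothetical paired scores: entry-examination results and the global
# performance-evaluation results of the same teachers two years later.
rng = np.random.default_rng(0)
entry_exam = rng.normal(100, 15, size=200)
performance = 0.25 * entry_exam + rng.normal(0, 15, size=200)

# Pearson correlation coefficient, read here as criterion-related
# validity evidence for the entry examination.
r = np.corrcoef(entry_exam, performance)[0, 1]
print(round(r, 2))
```

With synthetic data generated this way, the sample correlation lands near the .23–.28 range reported for the subject-matter test.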
Ordinary least squares (OLS) regressions showed that the teacher entry examination was one of the factors that best explained the variability in the global performance evaluation score: a 1% increase in the entry examination score was associated with a 3.8% increase in the global performance evaluation score (equivalent to 30 points). Grades were also an explanatory factor, though their effect was half the size of the entry examination effect. Previous teaching experience in public schools had a negative effect of the same magnitude as the entry examination effect, while staying in the same school during the first two years was associated with an increase of 27 points. An adverse socioeconomic context was not necessarily unfavorable, as shown by the positive effect of the marginalization index on the performance evaluation, but teaching in sparsely populated communities was, with effects of 42 to 92 points less.
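An OLS model of this kind regresses the performance score on the entry-exam score plus the covariates named above. The sketch below uses synthetic data and hypothetical coefficient values chosen only to mimic the signs reported in the study; none of the numbers are the study’s estimates:

```python
import numpy as np

# Synthetic teacher-level data (all names and magnitudes are illustrative).
rng = np.random.default_rng(1)
n = 300
entry_exam = rng.normal(100, 15, n)            # entry-examination score
grades = rng.normal(8.5, 0.5, n)               # teacher-education grades
experience = rng.integers(0, 2, n)             # prior public-school experience (0/1)
same_school = rng.integers(0, 2, n)            # stayed in the same school (0/1)
perf = (600 + 3.0 * entry_exam + 15 * grades
        - 45 * experience + 27 * same_school + rng.normal(0, 40, n))

# Design matrix with an intercept column; solve the least-squares problem.
X = np.column_stack([np.ones(n), entry_exam, grades, experience, same_school])
beta, *_ = np.linalg.lstsq(X, perf, rcond=None)
print(np.round(beta, 1))
```

The fitted coefficients recover the simulated signs: a positive entry-exam effect, a smaller positive grade effect, a negative experience effect, and a positive same-school effect.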
Finally, an innovative strategy estimated the teacher selection error rates, using success and failure measures of predicted teacher performance as validity criteria. The error and severe-error rates may not be exact, but the best prediction models showed an underselection error rate of 7% for the global performance evaluation score, 8% for the lesson plan score, and 14% for the portfolio score, reflecting the probability of excluding promising teachers from the teaching career. They also showed an overselection error rate of 12% for the global performance evaluation score, 13% for the lesson plan score, and 14% for the portfolio score, describing the probability of selecting underperforming teachers, the worse of the two outcomes.
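The two error rates can be read as the off-diagonal cells of a selection-by-success table: underselection counts rejected candidates who would have succeeded, overselection counts selected teachers who underperform. A minimal sketch with synthetic data and hypothetical cutoffs (none of these thresholds come from the study):

```python
import numpy as np

# Synthetic candidates: exam score drives selection; a noisy later
# performance score defines success (all values are illustrative).
rng = np.random.default_rng(2)
exam = rng.normal(100, 15, 500)
performance = 0.25 * exam + rng.normal(75, 12, 500)

selected = exam >= 100            # hiring decision based on the entry exam
successful = performance >= 100   # criterion of predicted teacher success

# Underselection: promising candidates left out of the teaching career.
under = np.mean(~selected & successful)
# Overselection: underperforming teachers who were nevertheless selected.
over = np.mean(selected & ~successful)
print(round(under, 2), round(over, 2))
```

Because the exam only partly predicts performance, both rates are strictly positive; tightening the exam cutoff trades overselection for underselection, which is the policy tension the study highlights.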
In light of this evidence, the sample studied shows that results in Mexico’s teacher entry examination were associated with the subsequent performance evaluation. Conceptually, however, a test can hardly predict teaching quality, since a test captures individuals’ knowledge, while teaching quality is a much richer concept, approximated by the concepts of effective teaching and teacher effectiveness, and including observable and unobservable characteristics as well as contributions to education outcomes other than learning outcomes. This means that the performance evaluation in Mexico was not necessarily a measure of effective teaching or of effective teachers, but it did capture teachers’ pedagogical and subject-matter knowledge, their ability to build a lesson plan, and their skill in assessing and selecting student work from different achievement levels. The most obvious missing information was teachers’ practices, as captured through classroom observations.
Despite the difficulty of measuring teaching quality with a test, and the difficulties of implementing a nationwide education reform, the study produced rigorous, scientific, and objective evidence that Mexico’s teacher entry examination is a robust method for selecting teachers, providing useful information on likely teacher performance when making a hiring decision. The most important implication is that, with some corrections to avoid selecting underperforming teachers and leaving promising candidates out of the teaching career, the examination may help guarantee the selection of quality teachers.

Link

https://academiccommons.columbia.edu/doi/10.7916/d8-sk36-6p06

Street address

Université de Montréal
Faculté des Sciences de l'Éducation
CRIFPE
90, avenue Vincent d'Indy
Pavillon Marie-Victorin – C-536
Outremont (Québec) H2V 2S9

Mailing address

Université de Montréal
Faculté des Sciences de l'Éducation
CRIFPE – C-543
C.P. 6128, succursale Centre-ville
Montréal (Québec) H3C 3J7