The school doors have barely flung open for the new academic year and already the education profession is caught in a tempest. The Los Angeles Times has published rankings of the effectiveness of 6,000 elementary school teachers in the city's unified school district, based on standardized test scores.
"The height of journalistic irresponsibility," thundered the local teachers union in a statement denouncing the Times decision to post the database online. The unions leader called for a boycott of the paper.
I understand why a teachers union would protest. Who, after all, wants their job evaluation published on the Internet? And teachers are a favorite whipping boy of the American right wing.
But would it be more responsible to suppress information about which public school teachers are effective at their jobs? As the Times made clear, school district officials could have performed the analysis its reporters did — they've had the data for years — but they declined to do so, fearful of provoking a fight with the union. Now that the Times' analysis is out there for everybody to see, school officials and teachers are talking about how such metrics can be used responsibly.
The statistical method the Times used is known as "value-added" analysis. It tracks individual students' progress from year to year on standardized tests in math and English and links that progress to their teachers. When a student outperforms expectations (based on past scores), the effect may be attributable to value added by the teacher. The performance of a single student cannot be ascribed with much certainty to the influence of the teacher alone, but with an aggregate of students this method of analysis yields a fairly reliable picture.
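The core idea can be sketched in a few lines of code. This is a deliberately simplified illustration, not the Times' actual model: expected scores here come from a plain least-squares fit of current scores against prior-year scores, and the teacher names and scores are invented.

```python
# Toy value-added sketch (illustrative only; real models are far more
# sophisticated). Each record: (teacher, prior-year score, current score).
students = [
    ("Ms. A", 60, 70),
    ("Ms. A", 80, 85),
    ("Mr. B", 60, 58),
    ("Mr. B", 80, 76),
]

# Fit expected current score as a linear function of prior score,
# using ordinary least squares over all students.
n = len(students)
xs = [s[1] for s in students]
ys = [s[2] for s in students]
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

# A teacher's "value added" is the average gap between each student's
# actual score and the score the fit predicted from their past performance.
value_added = {}
for teacher, prior, current in students:
    expected = intercept + slope * prior
    value_added.setdefault(teacher, []).append(current - expected)

for teacher, residuals in value_added.items():
    print(teacher, sum(residuals) / len(residuals))
```

Students who start low but beat their predicted scores count in their teacher's favor, which is why the method sidesteps the usual complaint that raw test scores simply reward teachers of already high-scoring students.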
Standardized tests are controversial among teachers and policy experts, not least because some types of students consistently perform better on them than others. However, because value-added analysis focuses on how individual students perform over time, noting which years their performance advances or declines, the usual arguments about bias in testing don't apply. Teachers of well-heeled white students do not necessarily have a leg up on teachers whose students are poor, minority or non-native speakers.
The Times was careful to acknowledge many caveats about the limits of value-added analysis. It is good at highlighting the highest- and lowest-ranking teachers, but not at distinguishing the merit of the majority of teachers who show up in the middle. And the system can evaluate only teachers whose students are tested in their subjects annually.
The paper turned up some interesting, even counterintuitive findings. Some of the highest-ranking teachers in the study taught challenging student populations, such as immigrant children with limited English, the poor and children whose parents have little education. Decoding what those teachers are doing right is an important next step.
No one is arguing that value-added analysis is the only way to determine which teachers are better at their jobs, or why. Nor is anyone disputing that such assessments should be only one of several metrics for evaluating or rewarding teachers.
Access to a quality public education is the bedrock of the country's prosperity. And teachers, the Times points out, are the "single most important school-related factor in a child's education." Sparing teachers from rigorous evaluation of their effectiveness serves neither the children nor the education profession. Nor is it kind to subpar teachers, who need opportunities to learn and grow professionally (or consider other career options).
As long as its limits are understood, assessing public school teachers more openly doesn't have to be a draconian, self-esteem-busting exercise that merely provides fodder for gadflies. It can be useful for teachers and administrators, and especially for parents who want the best education for their children.