The firing of a D.C. teacher called “creative,” “visionary” and “motivating” is the latest example of the many things wrong with value-added methods of evaluating teachers, the newest trend in school reform that is sweeping states with a push from the Obama administration.
My colleague Bill Turque tells the story of teacher Sarah Wysocki, who was let go by D.C. public schools because her students got low standardized test scores, even though she received stellar personal evaluations as a teacher.
She was evaluated under the D.C. teacher evaluation system, called IMPACT, a so-called “value-added” method of assessing teachers that uses complicated mathematical formulas purporting to measure how much “value” a teacher adds to a student’s learning.
One of the many profound problems with this is that the measurement for how much a student learns is a standardized test, which we know can only measure a narrow band of student achievement — and that’s only if the test is relatively well written, a student takes the exam without illness, anxiety or exhaustion, and nobody cheats.
The value-added formulas — which supposedly can factor in all of the outside variables that might affect how well a student performs on a test — are prone to so much error as to make them unreliable, according to mathematicians and other assessment experts who have warned against using these models.
Elizabeth Phillips, principal of P.S. 321 in Park Slope, N.Y., is trying to deal with the fallout from bad value-added evaluations at her school. New York City last month released value-added scores for 18,000 teachers over the objections of educators in the state, and Phillips wrote that they were “extremely inaccurate, both in terms of actual mistakes and in how data are interpreted.”
“It is wrong to call a great teacher a failing teacher because a few kids got 3-4 questions wrong one year rather than 2-3 questions wrong the year before,” Phillips wrote.
Wysocki found herself fired because, though her evaluations were high, her students did not score as highly as the complex value-added formula had predicted they would. She argued that this could have happened because more than half of her students’ test scores from the year earlier may have been inflated; the school they attended is now under investigation for cheating.
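To make the mechanics concrete, here is a deliberately oversimplified sketch of the value-added idea. Everything in it is invented for illustration: the linear model, the slope and intercept, and the scores. D.C.’s actual IMPACT formula is far more elaborate and is not public in this form. What the sketch does capture is that a teacher’s rating is a residual, the gap between what students actually scored and what a model predicted they would score, so anything that corrupts the prediction, such as inflated prior-year scores, corrupts the rating:

```python
# A toy sketch of the value-added idea, NOT D.C.'s actual IMPACT formula.
# The slope, intercept, and all scores below are hypothetical.

from statistics import mean

def predicted_score(prior_score, slope=0.9, intercept=10.0):
    """Predict this year's score from last year's, using a toy linear
    model standing in for the district's regression."""
    return slope * prior_score + intercept

def value_added(prior_scores, current_scores):
    """A teacher's 'value added': the average gap between what students
    actually scored and what the model predicted for them."""
    residuals = [actual - predicted_score(prior)
                 for prior, actual in zip(prior_scores, current_scores)]
    return mean(residuals)

# A class whose achievement grows modestly year over year.
honest_priors = [60, 65, 70, 75, 80]
current       = [65, 69, 73, 78, 82]
print(value_added(honest_priors, current))    # small positive rating

# Same class, same current-year scores, but last year's scores were
# inflated by cheating. The model now predicts more growth than any
# teacher could deliver, so the identical results look like failure.
inflated_priors = [p + 15 for p in honest_priors]
print(value_added(inflated_priors, current))  # sharply negative rating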
She was fired anyway, and now teaches in Fairfax County, one of the country’s best public school systems. Good move, D.C.
The Obama administration helped push states down the value-added road by insisting in Race to the Top requirements that student growth be included in evaluation systems. State after state seeking Race to the Top money — and even some that weren’t — jumped on the bandwagon. The arm-twisting on value-added has been so strong that even union leaders have stopped fighting for a blanket prohibition on its use and now are working to keep down the percentage of an evaluation that depends on student test scores.
We live in an era when school reformers keep talking about the importance of making decisions based on “data” — but apparently, only half-baked data will do. It may be that at some point in the future someone will figure out how to fairly and reliably use a mathematical formula to evaluate how well a teacher does his/her job, but we aren’t even close to being there yet.
So how fair is it to use such a system right now?
Not at all. One day, the folks who championed it — including administration officials who say they are concerned about fairness and equity — may well come to regret their myopia. It will be too late, though, for the teachers now being smeared by this exercise in delusional assessment.
Follow The Answer Sheet every day by bookmarking http://www.washingtonpost.com/blogs/answer-sheet.