Sheri G. Lederman has been teaching for 17 years as a fourth-grade teacher in New York’s Great Neck Public School district. Her students consistently outperform state averages on math and English standardized tests, and Thomas Dolan, the superintendent of Great Neck schools, signed an affidavit saying “her record is flawless” and that “she is highly regarded as an educator.”
Yet somehow, when Lederman received her 2013-14 evaluation, which is based in part on student standardized test scores, she was rated as “ineffective.” Now she has sued state officials over the method they used to make this determination in an action that could affect New York’s controversial teacher evaluation system.
How is it that a teacher known for excellence could be rated “ineffective”?
The convoluted statistical model that the state uses to evaluate how much a teacher “contributed” to students’ test scores awarded her only one out of 20 possible points. These ratings affect a teacher’s reputation and at some point are supposed to be used to determine a teacher’s pay and even job status.
The evaluation method, known as value-added modeling, or VAM, purports to predict through a complicated computer model how students with similar characteristics are supposed to perform on the exams — and how much growth they are supposed to show over time — and then rates teachers on how their actual students’ results compare to those theoretical students. New York is just one of the many states where VAM is one of the chief components used to evaluate teachers.
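In stripped-down terms, a value-added model regresses students’ current test scores on their prior scores (and, in practice, many other covariates), then credits or blames the teacher for the average gap between actual and predicted results. The sketch below is a deliberately simplified illustration with made-up data — real VAMs use multiple years of data, demographic covariates, and shrinkage estimators, and none of the names or numbers here come from New York’s actual model:

```python
# Illustrative, heavily simplified value-added calculation.
# All data is fabricated; a real VAM is far more complex.

def fit_line(xs, ys):
    """Ordinary least squares fit y = a + b*x with one predictor."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    a = my - b * mx
    return a, b

# (teacher, prior-year score, current-year score) — fabricated records
records = [
    ("A", 60, 68), ("A", 70, 80), ("A", 80, 88),
    ("B", 60, 62), ("B", 70, 71), ("B", 80, 83),
]

prior = [r[1] for r in records]
current = [r[2] for r in records]
a, b = fit_line(prior, current)

# A teacher's "value added" is the average gap between each student's
# actual score and the score the model predicted for a similar student.
residuals = {}
for teacher, x, y in records:
    residuals.setdefault(teacher, []).append(y - (a + b * x))
scores = {t: sum(v) / len(v) for t, v in residuals.items()}
print(scores)  # teacher A lands above prediction, teacher B below
```

Even in this toy version, the critique in the ASA quotes below is visible: the residual attributed to the “teacher” absorbs every factor the model leaves out, whether or not the teacher caused it.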
If it sounds as if it doesn’t make a lot of sense, that’s because it doesn’t. Testing experts have for years now been warning school reformers that efforts to evaluate teachers using VAM are not reliable or valid. But reformers, including Education Secretary Arne Duncan, have embraced the method as a “data-driven” evaluation solution championed by some economists. Earlier this year, the American Statistical Association issued a report slamming the use of VAM for teacher evaluation, saying in part:
* VAMs are generally based on standardized test scores and do not directly measure potential teacher contributions toward other student outcomes.
* VAMs typically measure correlation, not causation: Effects – positive or negative – attributed to a teacher may actually be caused by other factors that are not captured in the model.
Is Virginia using this model? I sure hope not. In Virginia, 40% of a teacher’s evaluation is based on test scores. Teachers don’t resent evaluations; they resent evaluations built on bad models that don’t fairly assess their skills as educators.
If the public hears of one bad teacher, there is a hue and cry for “teacher evaluation.” There has always been teacher evaluation. Some conservatives have upped the ante and demanded that test scores carry far more weight in teacher evaluations than is appropriate. That is what has happened in Virginia.
Let’s also get real. “Data-driven” is just pure bull crap in many situations. Some of teaching is an art. In fact, our best teachers are magicians, not scientists. There is no scientific way to teach an educator to teach. There has to be some native talent in there or it just doesn’t work.
I hope this woman wins her lawsuit. She has been deemed effective by her peers and her bosses. The only disagreement with this assessment comes from theoretical kids. Now that doesn’t even make sense.
There is no way to objectively “judge” a teacher except on the extremes.
Bad teachers are obvious, as are good teachers.
Everyone in between…..competent but mediocre…… is purely subjective. The problem is trying to objectively measure something that entails an ever-changing input…..the students involved. THEY too have responsibility for learning.
Cargo, you don’t hear this from me often….savor the words and the moment:
STANDING OVATION.
There are lots of competent teachers out there who are above mediocre. I was always taught to think that mediocre was negative. Blame my parents. There are probably few really bad teachers and few really outstanding teachers.
@Moon-howler
I couldn’t think of another word than mediocre to handle the vast middle above and below “average,” ….since you can’t even measure “average” in such a field.
I think you bring out an excellent point about not being able to measure average in the field, only the outliers at both ends.
It’s not called “mediocre”; it’s labeled as “proficient”.
Which sounds ever so much nicer.