September 28, 2016 Reading Time: 3 minutes

In light of my recent research brief about teacher value-added models, which estimate teachers’ contribution to student achievement, I was interested to read a recent Atlantic article by E.D. Hirsch.

Prof. Hirsch has been an influential voice in education reform for decades, and from the article’s title, “Don’t Blame the Teachers,” one can guess that he is not a fan of value-added models’ use in educational policy. His writing has always focused on cultural literacy and the use of a specific curriculum to improve education. Through this lens, he argues that value-added models are looking in the wrong place.

There are actually two separate criticisms here. One is that “current modes of testing cannot identify which student achievements and progress are the result of school instruction.” In other words, value-added models cannot isolate teachers’ contributions from other educational inputs, as their proponents claim they can.

The second is that the emphasis on value-added models implies that the best way to improve educational quality is by improving the caliber of teachers. Hirsch’s body of work fervently argues that a more cohesive curriculum and teacher environment would do far more than removing the worst teachers in the current system.

Policymakers must wrestle with both questions, but in studying the value-added research I can mainly speak to the first. Hirsch has previously argued that testing reading comprehension is particularly difficult because student understanding is not independent of the content of the material. Students may have trouble parsing an essay written on an unfamiliar subject, but have no trouble with an equally complex text on a topic about which they have more knowledge. What reading tests purport to measure, such as a general ability to find the main idea, Hirsch calls a “nonexistent general skill.”

While I do not agree this critique makes value-added measures invalid, it is certainly true that some subjects are easier to test than others. Imagine, for instance, that Hirsch’s critique of reading tests is half right. What if student test scores are affected by their prior knowledge of the subject matter of the tested passage, as he argues, but students also really do have a separate “critical reading” ability that they apply when reading about any subject?

Further, suppose that English teachers can help students improve their general “critical reading” skills, but whether their students happen to be assigned passages on familiar subjects on tests is random. If only one of these factors is even partly under the teacher’s control, how will that manifest in teachers’ value-added scores?

What would happen is that better teachers would, on average, still have higher value-added scores, but our estimates of teacher effects would be smaller and noisier than in subjects where the teacher can affect all of the relevant student skills. And indeed, studies using student math and reading scores regularly find stronger teacher effects in math.

Keeping in mind Hirsch’s critique, it might be that a perfect system would use value-added measures in teacher evaluations for some subjects but not others, or at least weight value-added differently against other measures of teacher effectiveness across subjects. However, I do not agree that the limits of current testing render value-added measures useless; even under the present regime, the teacher value-added model is a useful tool for educational policy that we would be worse off for ignoring entirely.


Patrick Coate, PhD
