Wednesday, August 24, 2011

Follow the incentives....

Value-Added Models (VAM) of teacher evaluation are touted, by some, as great instruments for assessing teachers and how they contribute to growth in student performance.  Michelle Rhee instituted such a program in Washington, DC during her tenure as chancellor/superintendent/whatever-it-was.  A political organization has formed in Tacoma and is now pushing Tacoma schools to adopt some sort of VAM.

There are many great things about such models.  We've used these kinds of tests for years in my school district.  We test students (in our case, math and reading only) in September, January and May.  We can look at beginning-of-year performance and compare it to end-of-year in order to see how much students 'grew' in that area.  I like getting the instant (well, overnight) feedback, and showing students a chart of their growth patterns over several years.

Proponents assure us they can isolate the teacher's contribution to this growth (as opposed to other factors beyond school), and maybe they can.

What isn't clear is just how we ought to compile the scores.  Will a teacher's class average growth determine the value added, or will the raw number of gainers and decliners be tallied, irrespective of the quantities of movement in either direction?  Or something else?
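To see why the compilation rule matters, here's a quick sketch in Python.  The scores and both scoring rules are made up by me for illustration; I have no idea which rule any district actually uses.

```python
# A minimal sketch (my own, not from any actual VAM spec) of two ways the same
# class data could be compiled into a teacher's "value added."

fall   = [42, 55, 61, 70, 88, 93]   # hypothetical fall percentile ranks
spring = [58, 54, 66, 69, 90, 92]   # hypothetical spring percentile ranks

growth = [s - f for f, s in zip(fall, spring)]

# Rule 1: average growth -- one student's big gain can offset several small slips.
avg_growth = sum(growth) / len(growth)

# Rule 2: tally gainers vs. decliners -- a 16-point gain counts the same as a 1-point gain.
gainers   = sum(1 for g in growth if g > 0)
decliners = sum(1 for g in growth if g < 0)

print(f"average growth: {avg_growth:+.1f} percentile points")                        # +3.3
print(f"gainers: {gainers}, decliners: {decliners}, net: {gainers - decliners:+d}")  # 3, 3, +0
```

With those made-up scores, the class looks solidly positive by the first rule and dead even by the second.  Same kids, same tests, different verdict.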

More importantly, nobody seems to have asked how this affects high-achieving schools and their teachers.  The VAM guidelines linked above suggest that teachers who generate higher than expected growth be assessed higher than those who generate expected or below expected growth.  That makes sense.

Problem is, the growth expectations are based on where a student starts.  If a student starts at the 98th percentile, expected growth will be very small, especially compared to a student at the 25th percentile.

Ostensibly, this difference is corrected by the 'higher than expected growth' standard.  But every teacher knows that really good and capable students (say, those with percentile ranks above 90) often "wobble"--successive test scores bounce around a high mark, but from one test to the next may not show improvement.  I don't know that I'd count it as less-than-expected performance when a student tests at the 98th percentile in fall and slips to the 96th in spring.  That student is operating at a very high level, and a drop of a couple of percentile points is largely predictable statistical variation.
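To put a rough number on the wobble, here's a toy simulation, entirely my own and with an assumed amount of test noise, of a student whose true ability sits at the 98th percentile all year and who simply takes the test twice.

```python
# Toy model of score "wobble": true ability never changes; only measurement noise does.
# The noise level (0.15 standard deviations) is an assumption, not a published figure.

import random
from statistics import NormalDist

random.seed(1)
nd = NormalDist()

true_z   = nd.inv_cdf(0.98)   # true ability fixed at the 98th percentile
noise_sd = 0.15               # assumed test measurement error, in z-score units

trials, drops = 10_000, 0
for _ in range(trials):
    fall_pct   = 100 * nd.cdf(true_z + random.gauss(0, noise_sd))
    spring_pct = 100 * nd.cdf(true_z + random.gauss(0, noise_sd))
    if spring_pct < fall_pct:   # "declined" even though nothing about the student changed
        drops += 1

print(f"spring percentile below fall in {100 * drops / trials:.0f}% of retests")  # ~50%
```

Roughly half the time the spring score comes back lower than the fall score by pure chance, so whether that 98-to-96 slip gets booked as 'below expected growth' depends entirely on how the model handles noise at the top of the scale.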

On the other hand, students who start low have a lot of room for rapid growth.  Of course, the expected growth is higher, and so is the risk that the student might remain disengaged from schooling and the testing, and thereby show (much) lower than expected growth.

My point is that a different set of prospects (and risks) attends VAM programs in different settings.

In my school, the 8th graders typically come in reading--as a group--somewhere around the 9th-grade level, maybe early 10th grade.  We typically send them on a little more than a year ahead of where they came in.

But what would happen if we got only 8 months' equivalent of growth in our 9 months of school?  Would we be deemed 'less than expected'?  I would suppose so, even though 8/9ths of the expected growth for students who are already nearly two years ahead may not be such a bad thing.
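Here's the back-of-the-envelope version of that scenario; the entering level and the one-grade-level-per-year benchmark are my own illustrative assumptions, not our actual test data.

```python
# Illustrative numbers only -- not our actual test results.

entering_level = 9.8       # hypothetical: 8th graders arrive reading near early 10th grade
expected_gain  = 1.0       # assumed benchmark: one grade-equivalent of growth per year
actual_gain    = 8 / 9     # "8 months of growth in 9 months of school"

exit_level = entering_level + actual_gain
print(f"growth: {actual_gain:.2f} of {expected_gain:.2f} expected "
      f"(short by {expected_gain - actual_gain:.2f})")
print(f"yet students leave 8th grade reading at about the {exit_level:.1f} grade level")
```

On paper that's a shortfall; in the classroom it's a group of 8th graders still reading about two grade levels ahead.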

Well, in any case, I do know this...under a VAM I want students to score below their actual ability on the fall test.  Lots of 'easier' growth to show by spring that way.

I'm just sayin'...follow the incentives.
