The story does not specify just how the consultants arrived at these scores, or whether the consultants had first-hand experience with the schools prior to their SIG designations. The story also does not make clear whether we can expect any discernible change in standardized test scores from the improvements noted by the consultants, and, if so, how much.
I have a friend who teaches AP courses at a Tacoma high school. His classes are notorious for being quite demanding, and the grade a student earns in the course is a pretty good predictor of how that student will fare on the AP exam. A or B in the course, pass the exam, probably with a pretty good score. D in the course, fail the exam. C in the course, it could go either way. He's not 100% accurate in his predictions, but strong patterns have recurred over the many years he's taught.
It seems reasonable to me that if we are going to put a lot of weight on standardized test scores, we should be able to connect the school improvement scores to increases in test scores. If a consultant gives a school decent, if modest, improvement scores, but test scores remain flat or drop, then we have to wonder about the nature and quality of the reforms, or about the scoring of them.
I hope that when Giaudrone's, Stewart's, and Jason Lee's test scores come back, we remember to compare them to this report card they just got. We are, after all, data-driven in our program design, assessment...in everything.
If we don't compare the reform scores to the test scores, then how seriously are we really taking the reform program, the tests, and the connection between the two?