So, my district's high school and middle school staffs were getting a training on the Common Core the other day. One feature, I guess you'd say, of the new standards is that they are "combed back" (that's the language everyone uses) from college to Kindergarten. This way we can be sure that we're taking all the necessary steps for college readiness, starting right away.
Our trainer says to us, "The Common Core standards are combed down from college level, instead of built upwards as in No Child Left Behind. Common Core starts with what it takes to be college ready, then they pull that down all the way to Kindergarten....That's best practice."
My head throbbed, ricocheting as it was between rage and depression. "That's best practice." Who in the world said so? Based on what? Not on any "research" or evidence or data, as everything else in the education world must be. It is, of course, based on philosophical assumptions. And the consultants who plump for it, of course, pronounce it best practice, because, well, they're expert in it, and that's why the district hired them.
The philosophical assumption part is the most important. No test results or any other kind of "data" are available with which to evaluate the effectiveness of either the program (standards) or the testing process. Further, the emphasis falls on a set of standards and a demanding test (I've seen it...it is indeed much more difficult than Washington's current MSP), but there is no specific curriculum for and by which the standards writers could be held accountable. Schools and teachers are on the line to produce success on standards built from an intellectual premise, one that is far from obviously best practice, because it's not clear any such thing exists.
Common Core's assumptions, and the program based on those assumptions, fly in the face of a pedagogic philosophy called classical education. Turns out, the classical trivium's progression of fact memorization (grammar), logical reasoning (logic), and persuasive explanation (rhetoric) may be better suited to the trajectory of a child's brain development. (See Brain Rules and NurtureShock, to name two.)
Classical education--like every other pedagogy and institutional arrangement--has its strengths and weaknesses. My point is not that classical education is the answer, but that there is as much to say for it as for Common Core's combed-down skills.
So let's stop thinking we trump everything and everybody else when we play the "best practice" card....It's more a joker than an ace.
What's middle school like...after coming back from remote learning? Well, let me tell you...it's different. (If you were reading this with standardized test eyes, that's the thesis statement. Just didn't want you to miss it.) The rest of the blog will explain "different."
Monday, March 24, 2014
Wednesday, March 19, 2014
Are All Tests Created Equally?
Listen to a German vocational education teacher telling his American counterparts about project-based learning and assessment. Thirty-five seconds in, he says, "No multiple choice questions."
I guess he's never met the folks at the Smarter Balanced Assessment Consortium, one of the groups constructing the test that will accompany the Common Core State Standards. That assessment will use a significant number of multiple-choice questions.
It's time we ask whether there are limitations to what multiple-choice questions can really do. (Spoiler Alert: There are.)
Tuesday, March 18, 2014
Educational Ruses
Maybe such language ("Ruses") is a bit strong, but it's that frustrating time of year...testing season. I like tests; I think tests are important "events." They focus the mind, motivate the hands (so to speak), and raise the academic intensity...when done right.
I don't enjoy tests for what we've made of them. They have come to serve too many purposes--the annual state test measures whether a student is "getting an education" by determining whether s/he is keeping up with the norms for his/her age; serves as a gauge of whether a teacher is functioning adequately; and shows whether a whole school is providing an education to its students. One test, three different jobs.
The analogy is a bit strained (cancer is not education), but that's like using one annual cancer screening to determine how well the patient is doing with/about cancer, how well the doctor is treating the patient, and how effective the hospital is at combating cancer. You can be sure that multiple measures of performance are taken in that case, following multiple tests--over time, rather than at one time--of the patient.
But I digress. The ruse comes in the form of gathering up the so-called bubble students (kids with test scores just below the passing mark) and giving them test support classes after school. The goal is to squeeze the last 3 or 4 or 5 points out of them, so they can get to passing.
This is primarily for external consumption. With a higher pass rate, we'll be heroes--we'll have shown that we're doing a better job educating our students.
Mind you, they won't necessarily be any better readers. In fact, they likely will not be. We'll look good, but they won't be any better at their academic skills. The only benefit I can see for the students is that "passing" would be rewarding, encouraging, and uplifting. And those are good.
The point...we should think clearly about the incentives and behaviors that our institutional and practical arrangements generate. If teachers and schools are going to be evaluated and rewarded based on how many students reach the mystical pass/fail bar, we will get practices like this.
Monday, March 17, 2014
Is sarcasm the last refuge of scoundrelly reading teachers?
One of the new emphases in the Common Core State Standards is close reading of the text in order to get the author's meaning. This will be tested by calling on students to identify the meaning accurately and to find the place in the text that substantiates it. The text itself is the only place where students may find meanings and the supporting evidence.
This passage is from one of the Corrective Reading books (a time-tested reading remediation program by Siegfried Engelmann of the University of Oregon). Read the paragraph starting with "All right...." and then try to answer the question.
I'm vexed and befuddled. The text says the spy chaser didn't want the "mustard jar" (a strangely anthropomorphized condiment bottle) to waste mustard. What's a remediation teacher to do?
Wednesday, March 12, 2014
Exploding Heads
I've just been in a "training" about the new Smarter Balanced test, which is the assessment following from the Common Core State Standards. My head is about to explode. Either standardized tests are insane, or we are insane to think that such tests accomplish what we think they accomplish.
More to come....
Tuesday, March 11, 2014
Try a new hammer?
All three of Tacoma’s SIG [School Improvement Grant] schools [Stewart, Giaudrone and Jason Lee], along with First Creek Middle School and Roosevelt Elementary School, are listed on this year’s state list of low performers, based on test scores. (Whole Story Here.)
Click here for an evaluation of the standardized test scores for those 3 SIG schools.
This all seems to generate a simple question--how much evidence, data, whatever, does it take before someone realizes that the very idea of the improvement grants might be flawed?
Why keep pouring money into programs that have shown they don't solve the problem?
Ok...that was two questions.
Read more here: http://www.thenewstribune.com/2014/03/09/3087228/poor-scores-at-school-draw-state.html#storylink=cpy
Does the SAT meet standard?
Educational testing generates much hand-wringing. Most often, it’s over the effectiveness, usefulness, or fairness of standardized testing of K-12 students. More recently, the College Board stirred the cauldron of frustration and discontent by announcing major changes to its Scholastic Aptitude Test, or SAT--the traditional bellwether of a student’s college preparedness and anchor of the entrance application.
David Coleman, president of the College Board, announced changes that are supposed to make the test a more realistic and accurate evaluation of what a student has already learned, and a better predictor of how they’ll do in college.
On the reading section, for example, it will no longer be adequate to answer a comprehension question correctly. The test-taker must also select the passage from the text that supports the answer. This evidence-based reading (and writing) is supposed to show greater understanding.
Of course, it’s not at all clear that it will, because every test design has strengths and weaknesses. In short, there are angles to take, games to play, etc. Each spring, I show my 8th graders how to answer multiple-choice Measurement of Student Progress (MSP) reading comprehension questions without even reading the passage. They get about 5 out of 8 right (where we would expect roughly 2 correct from random guessing).
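To put a number on "random guessing" (assuming the usual four answer choices per question, which is my assumption here, not a claim about any particular test form):

expected correct = 8 questions × 1/4 = 2

So about 5 of 8 without reading the passage is well above chance.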
You might call this a “trick of the trade,” the test-taking trade. The new SAT may shut down some of the tricks on the old SAT, but new angles will open up. Clever and interested people will find them, we can be sure.
It’s not hard to imagine, for instance, that if the new SAT questions demanding support from the text offer that support in multiple-choice form as well, the test can become an exercise in logic rather than reading. Read the question, figure out the answer from the most sensible “text support” options. Select.
But the problem with the SAT runs even deeper. Coleman was purportedly one of the principal authors of the new Common Core State Standards (CCSS), and he has professed his intention to align (match) the SAT to the CCSS.
Sounds sensible. But it remains unclear why an SAT prep course that teaches tricks is less appropriate than 13 years of schooling intentionally designed to match the test. After all, it may turn out that the SAT simply measures certain skills that the CCSS emphasize.
School lessons and the test, in other words, might become something of a closed loop, whereas before the pervasive use of test prep courses, the SAT was supposed to be a snapshot of aptitude, somewhat separate and different from school evaluations.
Aligning the SAT to the CCSS might, in other words, ultimately render the SAT less meaningful, not more, by making it like another round of the standardized tests administered in high school. Indeed, if the SAT is going to match the standards, why wouldn’t the standardized test results from high school suffice?
In any case, that discussion may be so much intellectual frippery. A far greater concern is whether both the CCSS and SAT, especially in their intertwining, actually teach and test things that we value. The jury is still out on this, but the evidence is mixed, at best.
The so-called “deep reading” that many laud in the CCSS, and that David Coleman seems intent to test on the SAT, may not make or prove students ready for college any more than prior standards.
An informal survey of a few college faculty showed surprise at the idea of extensive re-readings of texts (as posited by the CCSS). Social science and humanities professors (except perhaps those who specifically employ the scholarly technique of textual analysis) tend to prefer that students read widely in a topic to discern the breadth and depth of the “discussion” going on, then find their way into that discussion with their own analytical insights. Testing aptitude and preparation for this would be lengthy and costly (which is why high school grades do a better job of predicting college success).
So in the current educational culture we get the CCSS and the new SAT, and find the effort good. After all, it’s not for lack of effort--as The News Tribune editorialized recently--that Tacoma’s schools aren’t getting better. Too bad the standards don’t allow credit for effort.