I suppose it's easier for news reports and partisans to latch on to something like 'teachers want to protect seniority' and skip over the complications of such a situation.
Today, news reports said the Tacoma district was willing to retain seniority. I hear that as the district agreeing to retain seniority in layoffs. But there's more to it. Another aspect of seniority concerns who gets to decide which teachers get which assignments, and that part the district wants to eliminate, I'm told. This could create a situation in which principals can freely (and therefore sometimes arbitrarily) move people wherever they want.
Consider this scenario. The district seems to want to end seniority completely in cases of school-to-school transfer/displacement. It then becomes possible that a principal could move to another building and entice a few of his/her favorite teachers to come along, promising them choice assignments and displacing those who already hold those jobs. If such 'flexibility' (as the district calls it) were in place, you could end up with something like this:
A veteran teacher retires from a long-held spot. The principal brings in a teacher from another school, enticing him with a promise to make him department chair and give him a plum class assignment, jumping him over longer-serving teachers in the department. The department meets and collectively decides that they would prefer not to have the new member ushered into the choice assignment that way, without any experience.
Seniority rights are part of the support for the teachers' claim. But let's not get hung up on the notion of seniority or flexibility.
The real issue is about governance of the school and its programs. Let's face it, teachers are not always completely confident in principals' decision-making and judgment. And such authority is a lot to vest in one person.
Seniority may be somewhat rigid and mechanistic (so let's talk about adjusting that), but it was designed to safeguard against scenarios like the one above--which did happen.
Construing 'seniority' as nothing more than a scam to protect teachers, and thereby substantiating demands for 'flexibility,' is unhelpful: it swings the pendulum just as far in the opposite direction.
Let's not scrap one institutional arrangement in favor of another equally problematic institutional arrangement. Let's make sensible choices about adjustments....Let's create systems that actually work.
Wednesday, August 31, 2011
Tuesday, August 30, 2011
Getting worse?
The rumor mill in Tacoma has it that the district administration is pushing for significant change (I've heard the word "elimination") of the seniority clauses in the contract with teachers.
Rumors being what they are (and mills being what they are), I want to take care not to be too brazen with this. I do think it's safe to say that if the district is in fact standing on something so significant, this would have serious ramifications for schools and education.
The apparent logic behind such a change is to make evaluation and removal of "bad teachers" easier and, perhaps, ultimately to maneuver out older, more expensive teachers.
There are forces (read, local advocacy groups) in motion that seem favorable, at least implicitly, to both of these circumstances.
Just what sort of evaluation process would be implemented? (And, by the way, such a change would make every year a free-for-all: evaluations that lead to removal could be delivered at the end of any and every school year.)
And who would execute the process? Would it be simply deterministic? (Test score performance increases of a certain size guarantee a teacher's spot next year? Some other mechanistic measure?)
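To make concrete what a purely mechanistic rule might look like, here is a minimal sketch. Everything in it--the threshold, the numbers, the function name--is my invention for illustration, not anything the district has proposed:

```python
# Hypothetical sketch of a "simply deterministic" retention rule.
# The threshold and the sample gains are invented for illustration.

RETENTION_THRESHOLD = 3.0  # required average test-score gain, in points

def retained(avg_score_gain: float) -> bool:
    """A teacher keeps the job if and only if the class's average gain clears the bar."""
    return avg_score_gain >= RETENTION_THRESHOLD

print(retained(3.1))  # True  -- back next year
print(retained(2.9))  # False -- out, with no human judgment anywhere in the loop
```

Notice what such a rule does with a 0.2-point difference between two classes: it becomes the difference between keeping and losing a job.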
Or would a person or panel give input? Who? Based on what? Such input could be really effective...in a high trust environment. Tacoma, unfortunately, is becoming a lower trust environment every day.
The seniority system (like tenure for university professors) is ripe for review and adjustment, no doubt. Swinging hard to the other side, 'blowing up' the current institutional arrangements without a robust replacement that all the stakeholders have bought into (sorry for the Ed-speak), isn't a good plan, though.
I'm sure somehow this is what's best for kids...I just haven't figured it out yet.
Some Achievement Gap Data
Alan Krueger, a labor economist and President Obama's nominee to chair the Council of Economic Advisers, really is an education expert...he's done some significant studies of various issues, like class size differentials and their effects on the achievement gap.
I know it's a difficult budget climate, but that doesn't change findings like these:
Smaller classes (13-17 students instead of 22-25), from K to 3, improved black students' test scores by 7 to 10 percentile points, far more than the improvement for white students.
Smaller K-3 classes also led to more black students taking college entrance exams. The black-white college exam gap decreased by 60 percent following the smaller-class experience.
I'm not making a policy suggestion...I'm just presenting the data.
Sunday, August 28, 2011
Some Issues in Tacoma Schools OTHER THAN the Contract Negotiations
During the recent school board primary campaign, one candidate spoke of courage, the need for a school board member with the courage to lead. If it’s courage we need, I hope the current board, the next board, the union, advocacy groups...all of us will have enough to confront the issues before us. But not with a well-oiled agenda sharpened merely on conviction and preference. Rather, I hope we all consider questions like these below, first by examining our own thinking as we evaluate our reasoning and the evidence we use to support it, then by entering a discussion in which we listen as generously as we talk.
Achievement Gap
- How do we prioritize all the suggestions the consultant’s report makes? What evidence suggests that cultural training supports student achievement? The district has undertaken several cultural awareness initiatives before; why haven’t those generated more success?
- What is the best evidence about causes of and solutions to the achievement gap? The consultant’s report contains the following two sentences--about a page apart.
The Advisory Committee found that the achievement gap for African American students is caused primarily by:
Inequitable distribution of skilled and experienced teachers (p. 13)
and
The degree to which quality teachers are available to African American students in Tacoma schools could not be determined with the available information (p. 15)
How do we make sense of the “primary cause” of the achievement gap?
- Why has there been so much less mention of the Hispanic achievement gap?
- How does adopting the Common Core affect our pursuit of closing the achievement gap? How does cultural competency square with the Common Core?
- How does ‘innovation’ in school arrangements--for the sake of closing the achievement gap--affect our commitment to the comprehensive high school? Do specialty schools like SAMI and SOTA concentrate high-achieving students in one place by drawing them away from their ‘regular’ high schools, thereby depleting those school communities’ breadth of students?
Balancing Objectives
- The Tacoma schools have the responsibility to get students to standard, get them college ready, and close the achievement gap. Sometimes these objectives are at odds: getting a nearly-at-standard student to standard is much different from making that student college ready. How shall we reconcile these sometimes competing responsibilities?
Teacher Evaluation
- What are the components of a robust and supple teacher evaluation method? Are there any ‘predictive’ measures of a teacher’s quality? Should the district use such measures?
- What connection can we verify between student test scores and teacher effectiveness? How confidently can we use test scores to evaluate teachers?
By way of summarizing these points, Vibrant Schools Tacoma’s agenda reflects the general trends animating the current discussion. The advocacy group calls for a teacher evaluation protocol (student test scores constituting a significant portion) and increased cultural competency training to close the achievement gap.
But proponents of such programs offer little evidence that either cultural training or more elaborate teacher evaluations generate higher student achievement. Indeed, VST’s web site calls the reforms “common sense,” and offers up the BERC report, whose only discussion of any research is the listing of various effective teaching characteristics (the STAR protocol, etc.).
VST also provides the inaptly named “Will Seniority-Based Layoffs Undermine School Improvement Efforts in Washington State?” This document is merely a description of how many teachers would be affected by the different School Improvement Grant programs--transformation, turnaround and closure. It contains no analysis or projection of educational effects from the programs.
By contrast, the Economic Policy Institute has presented a thoroughly researched briefing paper on the concerns over test-based teacher evaluations. The authors point out various technical and statistical difficulties with such programs, to be sure. The more serious problem, however, is the slew of unintended negative consequences that follow, like a narrowed curriculum, decreased teacher collaboration, and disincentives to work with needier students. The authors counsel caution when using score-based evaluations.
In short, there is no magic bullet out there to fix education. It takes steady and consistent building of trusting relationships among the community, families, school administration, school staff and students. This relationship-building could follow from a serious conversation addressing the kinds of questions above.
Those kinds of conversations seem less likely every day.
Better summary than I can give
Here you go...don't bother with this whole blog. This article pretty well summarizes a decent portion of what I've been trying to say....
Thursday, August 25, 2011
Working relationships?
School Board members fight in Everett.
The article author wonders, "With tempers flaring and emotions on overdrive, just being in the same room again will be uncomfortable. How can five people, forced to spend hours together by the happenstance of being elected, rebuild a working relationship that devolved into an act of violence?"
Good question.
The TEA-Superintendent interaction in Tacoma isn't a model, that's for sure.
Wednesday, August 24, 2011
Follow the incentives....
Value-Added Models (VAM) of teacher evaluation are touted, by some, as a great instrument for assessing teachers and how they contribute to growth in student performance. Michelle Rhee instituted such a program in Washington, DC during her tenure as chancellor/superintendent/whatever-it-was. A political organization has formed in Tacoma and is now pushing Tacoma schools to adopt some sort of VAM.
There are many great things about such models. We've used these kinds of tests for years in my school district. We test students (in our case, math and reading only) in September, January and May. We can look at beginning-of-year performance and compare it to end-of-year in order to see how much students 'grew' in that area. I like getting the instant (well, overnight) feedback, and showing students a chart of their growth patterns over several years.
Proponents assure us they can isolate the teacher's contribution to this growth (as opposed to other factors beyond school), and maybe they can.
What isn't clear is just how we ought to compile the scores. Will a teacher's class average growth determine the value added, or will the raw number of gainers and decliners be tallied, irrespective of the quantities of movement in either direction? Or something else?
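Here is a minimal sketch of why that choice matters. The growth numbers are invented for illustration, but they show that the same two classes can rank in opposite order depending on which compilation method you pick:

```python
# Spring-minus-fall score growth for two hypothetical classes (invented numbers).
class_a = [10, -1, -1, -1]  # one big gainer, three slight decliners
class_b = [1, 1, 1, -2]     # three modest gainers, one decliner

def mean_growth(growths):
    """Class average growth."""
    return sum(growths) / len(growths)

def net_gainers(growths):
    """Gainers minus decliners, ignoring how far anyone moved."""
    ups = sum(1 for g in growths if g > 0)
    downs = sum(1 for g in growths if g < 0)
    return ups - downs

print(mean_growth(class_a), mean_growth(class_b))  # 1.75 vs. 0.25 -> teacher A wins
print(net_gainers(class_a), net_gainers(class_b))  # -2 vs. 2 -> teacher B wins
```

Same students, same scores, opposite verdicts--which is why 'or something else?' is not an idle question.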
More importantly, nobody seems to have asked how this affects high-achieving schools and their teachers. The VAM guidelines linked above suggest that teachers who generate higher than expected growth be assessed higher than those who generate expected or below expected growth. That makes sense.
Problem is, growth expectations are based on where a student starts. If a student starts in the 98th percentile, expected growth will be very small, especially compared to a student in the 25th percentile.
Ostensibly, this difference is corrected when you say 'achieves higher than expected growth.' But every teacher knows that really good and capable students (say, those with percentile ranks above 90) often "wobble"--successive test scores bounce around a high mark, but from one test to the next may not show improvement. I don't know that I'd count it as less-than-expected performance when a student tests at the 98th percentile in fall and slips to the 96th in spring. That student is operating at a very high level, and a drop of a few points like that is largely predictable statistical variation.
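A rough back-of-the-envelope check makes the point. Assuming a normally distributed score scale (my assumption, for illustration), the slip from the 98th to the 96th percentile works out to about 0.3 standard deviations--on the order of a single sitting's measurement error on many standardized tests:

```python
from statistics import NormalDist

norm = NormalDist()  # standard normal: converts percentiles to z-scores

z_fall = norm.inv_cdf(0.98)    # ~2.05 standard deviations above the mean
z_spring = norm.inv_cdf(0.96)  # ~1.75 standard deviations above the mean

# The "decline" is about 0.3 SD -- roughly the size of a typical test's
# standard error of measurement, i.e., hard to distinguish from noise.
print(round(z_fall - z_spring, 2))  # 0.3
```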
On the other hand, students who start low have a lot of room for rapid growth. Of course, the expected growth is higher, and so is the risk that the student might remain disengaged from schooling and the testing, and thereby show (much) lower than expected growth.
My point is that a different set of prospects (and risks) attend the VAM programs in the different settings.
In my school, the 8th graders typically come in reading--as a group--somewhere in the 9th grade level, maybe early 10th grade. We typically send them on a little bit more than a year ahead of where they came in.
But what would happen if we got only 8 months equivalent of growth in our 9 months in school? Would we be deemed 'less than expected'? I would suppose so, even though 8/9ths of expected growth when they're already nearly 2 years ahead may not be such a bad thing.
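Worked through with the approximate numbers above, the oddity is plain:

```python
# Grade-equivalent 'years', using the approximate figures from this post.
fall_level = 9.8        # 8th graders arrive reading at an early-10th-grade level
expected_gain = 1.0     # one year of growth expected over the school year
actual_gain = 8 / 9     # only 8 months of growth in 9 months of school

spring_level = fall_level + actual_gain
print(round(spring_level, 1))        # 10.7 -- still nearly two years ahead
print(actual_gain < expected_gain)   # True -- flagged as below expected anyway
```

The class ends the year nearly two grade levels ahead and still gets scored as falling short.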
Well, in any case, I do know this...under a VAM I want students to come in lower than their actual ability on their fall test. Lots of 'easier' growth for spring that way.
I'm just sayin'...follow the incentives.
Will Tacoma have school on Sept. 1?
My father, who spent most of his career in PR, told me many times that reality isn't as important or powerful as what people perceive as reality.
I'm beginning to perceive that even if school starts on time in Tacoma, it will do so under a cloud of contention. It's contract negotiation time and the Tacoma Education Association and the administration are sniping more, not less, as the deadline approaches.
Perception management has been awkward on both sides. First, in a message to members, the TEA spoke sharply of the administration's unwillingness to bargain; the administration said little. Eventually the administration posted a letter saying the union hadn't been as available for bargaining as it should have been. Then, today, the superintendent expressed his 'frustration' with the union's late addition of new issues into the discussion.
Read the public comments on the superintendent article and you'll see how inflamed passions are by all this. So much for the 'across the table' model of bargaining...it too readily ends up adversarial, which encourages both parties to think of the process in a zero-sum way, and thereby focus on what they think they must win. Heels get dug in, anger rises.
And if there is some sort of labor action...? Trust corrodes further, and organizational capacity weakens more. Doesn't look good...if these general perceptions are at all accurate, that is.
A 1931 Standardized Test
These kinds of things flash around the web periodically. In this case, it's a 1931 8th grade test from West Virginia.
A Washington Post education blogger posted it with the teaser, "you will probably flunk." And so continues the rich tradition of the implicit claims about how much harder, more demanding and generally better schools were 'back in the day.' And this time it's from a fairly liberal (generous and leftish) supporter of schools. It seems such a foregone conclusion that things were far better back in some bygone era (any bygone era) that we all believe it without even realizing we believe it.
Indeed, there are many interesting and good things about the test. The breadth of subjects is wider than we test for today. The social sciences are represented by separate tests for geography, civics and history. Spelling and penmanship get their due alongside reading and English (the latter mostly grammar).
And the social studies questions are wonderfully engaging open-ended questions demanding explanatory answers. "Why are the textile mills disappearing from New England?" "Explain the part played by agricultural machinery in national development?"
That's all well and good. But there are also some interesting (read, surprising or deficient) things about the test.
Take the reading exam. The test asks primarily recall questions about works the student has read (or was supposed to read). Students do not need to read on the test; they need to have read, and now recall things like who wrote what piece or passage. This is partly a test of memory, then. Further, we all know that some students can give an effective "short report" on books (one of the questions calls for this) they haven't actually read.
Or look at the arithmetic test. The highest math skill tested is fractions and percentages. Fractions and percentages are the beginning of the 8th-grade math year now. Today, if fractions and percentages were as far as you had gone, you would be placed in the 8th-grade math remediation class.
Finally, we get no detail about just how students fared on the exam. Did most pass? Did most fail? Was it an even split? Further, what percentage of 14-year-olds even took the test? Did some racial, socio-economic, or geographic groups finish 8th grade--and take the test--in different proportions?
And then look at the exam again, especially the social studies. "Connect the person with the thing he is responsible for." (By the way, Amundsen is misspelled in this list.)
I would think that it would be fairly possible for a student to connect Wilson to the 14 Points and Susan B. Anthony to Women's Suffrage and really know nothing about those people or the thing for which "he" was responsible.
In other words, they actually had the same problem with standardized tests back then that we have now. They weren't necessarily testing the things they wanted to test. Tests are invariably like that.
At least one thing does look more serious...the stakes. We talk of high-stakes testing today. Look what's at stake on this 1931 test.
"These [test] grades do (or do not) entitle you to an Elementary Diploma which admits you to any High School in West Virginia."
Now those are some high stakes. And to whom is that sentence addressed? "Dear Pupil," that short letter starts. The stakes, consequences, incentives, etc., are on the student.
That is different. And, dare I say it, better.
Tuesday, August 23, 2011
What does 'standardized' mean?
Since it plays such a big role in education these days, I found myself pondering just what the word standardized means, and just what it implies about the education we're trying to create.
Standard means many things. The two definitions applicable to this particular context are
A) something established by authority, custom, or general consent as a model or example
B) something set up and established by authority as a rule for the measure of quantity, weight, extent, or quality
To summarize, when we talk about standardized, standard means the point we agree to call acceptable, as in "students need to meet standard"--a specific score on a performance test--in various school subjects. It can also connote a certain level of quality. "We set high standards"--levels of achievement that we expect.
Obviously, we could meet standard and achieve at very low standards....It depends on where the bar is set.
--ize makes the noun into a verb. --d makes it a past participle.
When we use the word standardized, then, we are using a passive-voice construction to say that something has been fashioned to fit, or conform to, a standard.
Questions abound from this. What has been standardized...the curriculum, in order to make a largely objective test easier to administer, or the test, in relationship to the curriculum? Have learning outcomes been routinized along the way, as a by-product of standardization? Do students end up more standardized, then? Who does the standardizing? Who are the experts or authorities, in other words, who set the standards? Do we all generally consent to these standards?
I'm not sure I'm fond of the answers to these questions, so maybe I shouldn't ask.
Monday, August 22, 2011
The pitfalls of self-interest
I appreciate self-reflection. I encourage students to do it. I try to do it. You know..."unexamined life" and all that.
But it matters what material we use for that reflection. That's why I'm a bit concerned about the latest advice I read in EdWeek--the closest thing I know to the journal for the teaching profession.
In this article on 5 questions that will improve your teaching, we are given a list of queries that reify the student-centered approach to teaching. And there's the problem....Student-centeredness can get (and in this case has gotten) a little off balance. To put it briefly--and perhaps mildly--we teachers are encouraged to be student-centered, and students, already inclined this way anyway, are encouraged to be self-centered.
It requires a careful balancing act for a teacher to be student-centered while requiring that students be other-centered. This balance is not strengthened by self-reflection questions like "is what I am doing--in the classroom--going to connect to the students' self-interest?" (Question #2)
Problems arise here...on many levels.
First, it assumes that education and students' self-interest can be aligned. Perhaps they can, to some degree or in a particular moment or situation. But education has an inescapable element of the long term in it. You keep doing math problems or conjugating Spanish verbs because it's somehow good for you in some vague future. But children's--especially teenagers'--apprehension of the future consequences of current behavior is notoriously bad. The part of the brain that processes that kind of long-term abstraction doesn't develop fully until the late teen years.
Second, an effort to connect to students' self-interest risks elevating their fancy above other values. My teenage son is anxious for school to start again...for "the social activity, not the academics," he notes. It would be in his interest--or at least his desire--if all his academic work could somehow be rendered by texting and delivered by way of iTunes.
Third, the first two points together reveal the danger of confusing students' interests with their desires. Education has always been characterized by the strain inherent in the idea that adults understand a young person's long-term interests better than the young person does. Why else would I have been compelled to endure so much math? As students get older they come to assert their own interests and desires, thereby making the teenage years quite frustrating for parent and child alike.
Ultimately, the youngster's burgeoning expression of his/her own desires (which still tend to be short-run) collides with the adult (parent, teacher, "society") perception of the longer-term requirements--not yet apprehended by the youngster, or apprehended differently by youngsters and adults. What else would be the source of the long-standing, never-resolved plaint, "Why do I have to do this math? I'll never use it"? Has an adult ever answered this in a way that a youngster embraced?
But we're encouraged now to think about students' self-interest. Not just their interest, but their self-interest. Ponder the difference. If nothing else, the adults' pursuit of the students' self-interest makes nonsense of the completely understandable but ultimately unsatisfactory parental line, "It's for your own good."
I understand connecting to students' interests. I understand connecting to their desires. Some of the time. I also understand that their self-interests are just that...SELF-interest. And I understand better than they do how certain reading and writing exercises will be helpful for some or most of them.
I also understand that if they knew some of the things I know about certain likelihoods in their lives, they might render their self-interests a little differently.
If this tension ever goes away, it's because we won't be engaged in anything called education anymore.
Addendum--
Just after posting this, I read an article about students dropping out in NY. One way students can drop out is for the student, parents, teachers, counselors and principal to meet together, at which time the parents can "sign out" the student.
The article's author wrote this line, "School administrators and staff do their best to talk them out of this because they know it can have long term effects on the teenager" (my emphasis).
Does the teenager NOT know this? Does the teenager know it and not care? Know and choose in favor of the short-run? Or, possibly, are the school staff wrong?
Wednesday, August 17, 2011
More data...always a good thing, right?
Interesting story in The News Tribune about the SIG (School Improvement Grant) schools--all of them middle schools--in Tacoma. Jason Lee and Giaudrone get marks of modest improvement from the consultants who undertook the evaluations. Stewart came in for a fair dose of criticism and decidedly mixed scores. (Click the report-card-looking graphic in the "More Photos" box.)
The story does not specify just how the consultants arrived at these scores, or whether the consultants had first-hand experience with the schools prior to their SIG designations. The story also does not make clear whether we can expect any discernible change in standardized test scores from the improvements noted by the consultants, and if so how much change.
I have a friend who teaches AP courses at a Tacoma high school. His classes are notorious for being quite demanding, and the grade a student earns in the course is a pretty good predictor of how that student will fare on the AP exam. A or B in the course, pass the exam, probably with a pretty good score. D in the course, fail the exam. C in the course, it could go either way. He's not 100% in his predictions, but strong patterns have recurred these many years he's taught.
It seems reasonable to me that if we are going to put a lot of weight on standardized test scores, we should be able to connect the school improvement scores to increases in test scores. If a consultant gives a school decent, if modest, score increases, but test scores remain flat or drop, then we have to wonder about the nature and quality of the reforms or the scoring of them.
I hope that when Giaudrone's, Stewart's and Jason Lee's test scores come back, we remember to compare them to this report card they just got. We are, after all, data-driven in our program design, assessment...in everything.
If we don't compare the reform scores to the test scores, then how seriously are we really taking the reform program, the tests, and the connection between the two?
Tuesday, August 16, 2011
Congratulations
Scott Heinze and Dexter Gordon are moving on to the general election for Tacoma School Board seat #3. It has been an interesting and enlightening two months. I'm sure the next several posts will include some of my reflections on the process.
Wednesday, August 10, 2011
Standardized tests are not so standard across states
It probably didn't take much of a study to figure that out, but a study was done nonetheless.
The same problem may arise even within states...like ours, for instance.
And just what are we testing on these standardized tests? NAEP (National Assessment of Educational Progress) results for civics show that we're doing badly in that subject as well. But I'm not sure these kinds of test items are proof of that.
The common core, and common assessment instruments that follow from it, might help address some of these problems.
At least if we all agree on a common core, and we develop some sort of common assessment of that core, and we apprehend the strengths and weaknesses of both, we can talk about students and their performance in something of a common language. That's a good start.