Thursday, July 24, 2014

Charter School Marketing

Apparently, charter school leaders got together to talk about how to hone the message about what they are and do.

Read it...included is a list of what not to say and what to say instead.
Say Charter community, not sector.  Say Responsive to student needs, not experiments.

They left off lots of other stuff--to say or not to say, I can't decide.

Best Practice
What Works
Research-based Evidence
What's Best for Kids
Teach the Whole Child

I'm sure I've missed some, but you get the point.  Never consider attending a charter school that doesn't do heavy trade in this language!

Tuesday, July 15, 2014

Another installment of my 15 minutes (of fame, that is)

TVW--the public service civic engagement TV of Washington--broadcast my recent book talk at the University of Washington, Tacoma.  Now, you too can watch! 

Friday, July 4, 2014

The Problem with Teachers

Reading the sometimes interesting Intelligence and How to Get It, by Richard Nisbett.  Among many other statistically confirmed findings, he reports that teachers with experience add more to a student's learning.  But, he notes (without providing the statistical results in this case), most of the experience boost comes from the first year of teaching.

In other words, the big gain--for students--from teacher experience is in the difference between 0 and 1 year of teacher experience.

"So," Nisbett concludes, "it is definitely worth trying to avoid having your child put in a class with a rookie teacher."

Not, it would be worth figuring out how schools could create support arrangements to help rookie teachers do better.  (I know more than one quite good teacher who almost quit in his/her first year.)

Instead, make sure your own child avoids the bad situation.

Not, when you know you have a rookie teacher, work with your principal--and your student--to address some of the experiential difficulties that will arise.

Instead, do the socially destructive thing--maximize your student's (but not others') prospects by getting out of that class.  Don't forget, not everyone can get out.

Not, contribute to some long-term resolution of this difficulty.  Instead, maximize your private needs and move on.

Every teacher was a rookie teacher once, after all.  So, this is a persistent "problem."

The book went downhill after that, at least for me.  I'm tired of the game we play where we talk about how we're in this (education) collectively, doing what's best for kids, striving for harmony within diversity, to create an environment where every student can learn.  (Have I got enough of the cliches in?)  We talk about it, but in reality, of course, each individual (parent) wants the best for their child, with much less regard for what's good for all the other children.

There's something of a conflict or at least a tradeoff in this.  Most of us don't mind if the other kids don't get the very best, as long as our children do (as far as we perceive).  Oh, it's not expressed so bleakly and bluntly.  No, it's more like an enormous drop-off in attention, interest and concern for anything once one's own child is well-situated.

We all do it, and it makes the job of producing and distributing "education" (widely, to the broader audience) more difficult.

Here's to you, rookie teachers.

Monday, June 30, 2014

Stop saying "best practice"...

...there's no such thing, unless you qualify the claim with the relevant conditions or parameters.

"This is best practice under these conditions...."  Or, "This is best practice for these students...."

It's logically and practically impossible to imagine there's a best practice for every student, given the wide range of ability levels, learning styles, brain development, and more present among any group of 30 students--even 30 in the same grade.

There may be something like a best practice for maximizing aggregate outcomes on some particular measure.  In other words, there may be a practice that generates the greatest likelihood of raising something like a standardized test score for the greatest portion of those 30 kids in the class room.  It won't necessarily raise everyone's score, and, even more likely, it won't raise each individual's score as much as another approach/practice might have raised a particular 1 or 2 or 3 students' scores.

What I'm getting at is another of the assumptions embedded in the standardization process, and especially the testing that accompanies it.  The push toward standardization, measurement and assessment focuses our attention on aggregate outcomes.  We assume that those things the tests measure are all and only what we think is worth a student knowing, understanding or doing.  Further, we assume that collectively aggregating all student scores into a few measures, the improvement of which is the primary objective, is worthwhile.

For example, if I could get 27 students' test scores to go up the greatest possible amount (as if we could know that), but the 3 other students' scores stayed flat, or even dropped, I'd be a hero--90% went up.  But was there some other "practice" (not best for the aggregate, but best for those 3 students) that would have raised those 3 scores?  If raising their scores would have meant trading off a different 3 students' scores, what would be the best practice?  What about trading off 5 other students' scores?
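The tradeoff above can be put in toy arithmetic.  A sketch, with all numbers invented for illustration: "Practice A" nudges 27 of 30 scores up a little, while a hypothetical "Practice B" raises only the other 3 students' scores, but by much more.  Which one is "best" depends entirely on which aggregate you look at.

```python
# Toy illustration (all numbers invented): what an aggregate measure sees.
# Practice A raises 27 of 30 scores a little; Practice B raises only the
# 3 remaining students' scores, but by a lot.
gains_a = [2] * 27 + [0] * 3       # 90% of students improve, modestly
gains_b = [0] * 27 + [10] * 3      # 10% improve, dramatically

pct_improved_a = sum(g > 0 for g in gains_a) / 30   # 0.9
pct_improved_b = sum(g > 0 for g in gains_b) / 30   # 0.1
total_a, total_b = sum(gains_a), sum(gains_b)       # 54 vs. 30

# By "percent of students improved" (or total points), A is the hero;
# judged by what those 3 students gained, B was the better practice for them.
print(pct_improved_a, pct_improved_b, total_a, total_b)
```

The point of the sketch is only that "best" is relative to the aggregate you choose to maximize, which is exactly the choice the standardization push makes for us.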

So, we need to talk about "best practice" while acknowledging that we make guesses at tradeoffs among students, while maintaining the objective of maximizing as many of the 30 students' scores as possible.

It may not sound like it, but what I've just raised is a complicated set of trade offs and balances in pursuit of one particular goal--aggregate outcomes.  You can't spend much time in a class room without realizing that some students respond very differently to a teaching practice, "best" or otherwise.  When I think about what we do in my 8th grade English room, I try to create a variety of tasks, activities, assignments, etc., knowing that some of that work will appeal to one part of the class, and another portion of the work will appeal to some other students.  I've found very few things that stimulate, engage, inspire, whatever, ALL the students in the room.  And some students are engaged by very little of what we do.  Striking this balance is the constant endeavor of identifying so-called "best practice" in the first place.

Ultimately, "best" is something of a trope.  We won't really get there, in part because we disagree about what we should be pursuing in the first place.  That complicates the question before we even begin to answer it.

Friday, June 27, 2014

Parents, stay involved in teaching, learning, schools...please.

Lots of teeth have been gnashed over parent involvement in schools (or is it homework, or is it...?).  Marilyn Price-Mitchell does a nice job of evaluating the currently vogue notion that getting involved in your child’s education isn’t as helpful as you think it is.  I only add some variation to her observations.
Keith Robinson and Angel Harris started the recent conflagration, first with their book, Broken Compass, then with opinion pieces in various leading newspapers--or the NYT, at least.   There they opined that parental involvement does not educationally benefit children as much as we all presume.  Their research showed that very few and very specific types of involvement were helpful to academic performance; namely, clear expectations about college attendance, general conversation with children about what they’re doing in school and requesting a particular teacher were the only engagement that proved universally worthwhile.
“What should parents do?  They should set the stage and then leave it,” Robinson and Harris conclude.  Their findings appear reasonable, based as they are on copious data.  But viewed from a different angle, their conclusions are less compelling. 
Following the predominant fad of relying on econometrically ordered data, Robinson and Harris show, for instance, that helping your child with homework doesn’t raise her/his standardized test score.  Neither does conferencing with the principal and the teachers.
Undoubtedly, the statistical correlations are clear.  The problem is that raising standardized test scores may not be the only, or even the best, reason to help your child with homework.  Supporting your student’s effort to grasp concepts and practice skills—which may or may not be relevant to the standardized test—is important, too.  
Further, my own research, as well as experience in the class room, calls into question whether standardized tests really measure all of what we want youngsters to be learning and doing in school.  To put it bluntly, standardized tests can be gamed, making them more a test of how well the student tests, and less an assessment of how a student is performing in the wide range of academic activities s/he’s undertaking.
Each spring, I show my 8th graders how to answer multiple choice reading comprehension questions on Washington state’s annual test, the Measurement of Student Progress, without even reading the passage.  The class average consistently exceeds 5 correct out of 8 (where we would assume roughly 2 correct, based on random guessing).  Improvement on standardized tests, to be sure, is one measure of performance, but it may be less useful than we think or hope.
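That "roughly 2 correct" baseline is just expected-value arithmetic.  A minimal sketch, assuming each question offers 4 answer options (the number of options is my assumption; the 8 questions and the class average above 5 are from the post):

```python
# Expected score from pure random guessing on 8 multiple-choice questions.
# Assumption: 4 answer options per question, so a 1-in-4 chance per guess.
n_questions = 8
p_correct = 1 / 4

expected_by_chance = n_questions * p_correct   # 8 * 0.25 = 2.0
class_average = 5                              # reported result, test-taking strategy only

print(expected_by_chance, class_average)       # 2.0 vs. 5 — well above chance
```

A class average of 5-plus against a chance baseline of 2 is the whole point: the strategy, not the reading, is doing the work.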
As for conferencing with school staff, Robinson and Harris are undoubtedly right—those conferences often contribute little to immediate student improvement, but not for the reasons the authors imply.  Most such meetings are for students who aren’t doing very well.  This “intervention,” in other words, is taken with a non-random sample of students.  And, yes, each individual conference often means little, as they are an inadequate response to underlying academic problems, undertaken with low expectations for success in the immediate term.  
But intervention must start somewhere, and every teacher has been thoroughly discouraged with a student only to see him/her completely transformed two or three years later.  I’ve seen plenty of students whose repeated conferencing with my colleagues and me did little, but as time granted greater maturity to these youngsters, they blossomed into well-functioning students.  Who’s to say that those several frustrating conferences in one grade weren’t important for the later development of that student?  But identifying variables and disaggregating competing causes in an econometric analysis of this effect will be much more complicated and nebulous than the more direct connections Robinson and Harris evaluated.
Ultimately, the admonition to “set the stage and then leave it” is somewhat dubious, mostly because “setting the stage” remains so abstract as to be meaningless.  Just what would stage-setting involve?  When?  More to the point, what is the advice to individual parents when Robinson and Harris’ findings are based on large demographic categorizations? 
A growing body of brain science and education research demonstrates that the stage that needs the most setting is the early years, before school age.  Children who have more verbal interactions with a caring adult in the first 12 months, for instance, show better academic performance later.  This and other early developmental patterns may confound later statistical correlations that Robinson and Harris report.
But causal and correlational difficulties notwithstanding, let’s grant Robinson and Harris their point—that parent involvement may not cause better student performance, or at least higher test scores.  Parent involvement has important benefits, however, beyond the individual effects for parents’ own children, benefits for the school as an organization and a community.
As the regulation of schools moves further up the bureaucratic hierarchy and farther away from local participants—including parents—the legal and institutional requirements get more intensive but less concretely useful in class rooms.  This bureaucratization serves to ‘tighten’ the organizational environment, thereby increasing the risk of what sociologist Charles Perrow called normal accidents—predictable system failure arising out of the very complexity of that system.  The increasing governmental management and concomitant decreased local involvement, in other words, make it more likely that schools as organizations will ‘fail’ to meet their goals.

The antidote to this dysfunction is local engagement, part of which includes parent involvement.  Volunteering at school, or conferencing with the teachers may be part of this, but other forms and patterns of local participation matter just as much or more.  

Robinson and Harris are correct that government programs intended to stimulate parent engagement amount to little, but, again, for different reasons than they posit.  Local activity must arise from the natural, and informal, impulses of the participants, not from a government mandate or program.
What does this local involvement look like?  Story nights--coordinated by parents and teachers, in which students present their work; parent engagement in discussion about curriculum adoption (which requires an administrative culture open to this); volunteerism to support students in need, not just one’s own children; annual student-led conferences with parents and teachers, or anything that causes parents, staff and students to work together are all examples of the collaborative and relational work that makes for more effective functioning of a school as organization.  When this kind of engagement happens, tasks get done, trust increases, and relationships grow better and richer.  
Teaching and learning, after all, are about relationships.  Engaged collaborative parent involvement is a relational tonic for the souls (of teachers, administrators, parents and students) wearied by the bureaucratic reality of school.
Oh yeah, you can get more on this by clicking those links over to the right...The Normal Accident Theory of Education stuff.

Monday, June 23, 2014

This is your school on Reform....Get the picture!

My colleagues and I frequently muse that if people really looked at and understood what’s going on in education, or in their local school, they would never stop muttering in astonished disbelief.  The seeming “good” news that Tacoma's Lincoln High School was able to lift itself off the state’s watch list of low-performing schools is a case in point.

Just two months ago, Lincoln was heralded as the 22nd most challenging high school in Washington.  (Note the district’s pride.)  

“Challenging” was awarded by Washington Post education columnist Jay Mathews, and was based on the ratio of graduates to AP course enrollments.  This sits nicely with Tacoma’s conviction that offering more AP courses, into which students will be automatically enrolled if their performance meets certain markers, is the “best practice” for everything from pushing student performance higher to closing the African-American (and, presumably, Hispanic-American) achievement gap.

This week, though, administrators breathed a sigh of relief when Lincoln dodged the bullet that is the state’s watch list of low-performing schools.  They found enough clerical errors in their own records to get Lincoln off the dreaded list.  It seems that some 13 students who had actually graduated were left on the rolls in such a way that they were counted as dropouts or otherwise not graduating.  Some further 30 students had transferred from Lincoln and should not have counted against the school.  

Good for Lincoln...but not really.  Schools are assigned to the state’s watch list for 3-year rolling graduation rates below 60%.  Before identifying the clerical errors, Lincoln had been at 59.8%.  After fixing the errors, the rate climbed to 65%.
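The bookkeeping arithmetic is worth seeing.  A sketch with a hypothetical cohort size—Lincoln's actual counts weren't published, so the 625 below is an assumption chosen only so the rates match the reported 59.8% and 65%; the 13 reclassified graduates and 30 removed transfers are from the news account:

```python
# Hypothetical illustration: how reclassifying 13 graduates and removing
# 30 transfers moves a graduation rate across the 60% watch-list line.
# Cohort and graduate counts are assumptions fitted to the reported rates.
grads_before, cohort_before = 374, 625
rate_before = grads_before / cohort_before      # ~0.598, below the 60% cutoff

grads_after = grads_before + 13                 # graduates mis-recorded as dropouts
cohort_after = cohort_before - 30               # transfers wrongly left in the cohort
rate_after = grads_after / cohort_after         # ~0.650, safely off the list

print(f"before: {rate_before:.1%}, after: {rate_after:.1%}")
```

Nothing about the school changed; only the records did.  That fragility is the point of the post.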

In other words, the very same school touted as "challenging" because it had a high ratio of graduates to AP enrollments didn’t even graduate two out of three students in the last 3 years.

When Lincoln was awarded its status as most challenging, a colleague observed that one way to get a high ratio of graduates to AP enrollments is to drive down the number of graduates.  At the time, we both thought he was just being farcical with the logical possibilities, but apparently not. 

Unless you're part of the educracy, and therefore understand all of this, you might be surprised.   Don't be, though.  This is part and parcel of the strange game--you might call it the fog of assessment--that goes on when it comes to evaluating student, teacher and school performance by way of a few high-value data points.

Such is the bureaucratic quagmire that is education today.  We grope, like the blind men describing an elephant, for little bits of information with which we satisfy ourselves that we really have got to the heart of the matter.  

The answer?  Get involved in a school.  Sure, your kid’s, but better yet, a school in another neighborhood.  If you can’t invest that much time, at least visit a school.  Watch a couple of classes, for several days.  What you see just might surprise you, and you certainly will understand education and schools better than any test scores could possibly inform you.