Clippings: School testing results can mislead

School testing seems to be in the news constantly, partly thanks to the “No Child Left Behind” act — NCLB, pronounced as nickel-bee by anyone who works in or covers schools, or works on laws that affect them.
There are a few things the general public should know about tests, but may not. First, most people probably know tests measure students’ knowledge in the core subjects of language, math and science.
Something many may not know is that tests do OK at evaluating students’ grasp of core content.
I have it on good authority (I am married to a lovely woman who happens to be an excellent teacher) that the New England Common Assessment Program (NECAP, pronounced just like what a thug does to someone who owes money to a bookie) tests used by our local schools are written by teachers. That’s one reason the tests are valid and useful.
So, while quite possibly too much emphasis is placed on test results, especially when one gets into NCLB comparisons, testing itself does have a role to play in holding schools accountable.
One problem in comparing schools’ performance, however, is that not all student bodies are created equal. It’s no secret that overall schools in wealthier communities score better than those in less-well-off areas; performance among richer and poorer schools varies, but the trend is undeniable.
The other problem derives from NCLB’s insistence on “adequate yearly progress,” or AYP, which is measured most critically by the performance of a school’s less-well-off students.
Those students are identified by their eligibility for free or reduced-cost lunches.
To start with, the goal of getting better every year is laudable on the surface, but sadly unrealistic. I wish we could hold Congress, which passed NCLB, to that same standard. Good luck with that.
We can all strive to improve, but there can be setbacks in our own circumstances, never mind in an institution the size of even a small school. Issues that include personnel changes and larger economic circumstances can complicate making progress year after year.
But a central issue is how NCLB calls for AYP to be measured. One might think that comparing test scores of the same students might be appropriate. For example, students are tested in elementary school, middle school and high school. It might be reasonable to check scores for the same group of students at each level to see if they are mastering the material and thus making progress.
That’s not how it works: Every year, different groups of students are measured against one another. Last year’s 11th grade scores are compared with this year’s, and next year’s will be compared with this year’s.
Now, I know that at every graduation, and I’ve covered 14 of the past 16 at Vergennes Union High School, each class is lauded as an unparalleled group of intellectual giants, athletic marvels, tireless community servants and brilliant musicians. As Garrison Keillor would say, they’re all above average.
Privately, let’s just say school officials and teachers draw some distinctions. To paraphrase George Orwell, some classes are more equal than others, especially in smaller schools in a state like Vermont.
The result of comparing test scores of different groups? That’s right, scores naturally go up and down. Parents and community members can probably take an occasional dip in stride. If a spike upward never follows, however, cause for concern might be lurking.
On a larger scale, every few years there is much wailing and gnashing of teeth — especially among Republican politicians who send their kids to private schools — as U.S. standardized scores are compared to those of other industrialized nations. Calls for vouchers and charter schools inevitably follow, even though research now shows charter schools do not outperform public schools. The 2010 numbers, which came out in December, showed America in the middle of the pack.
Two things to keep in mind. Statistics proved hard to find, but the U.S. apparently tests a higher percentage of its public school students than other countries, thus skewing our scores downward.
For example, Shanghai, China, students rocked the tests this past year, stunning the world. But a Jan. 13 online L.A. Times article contains this nugget:
“The Shanghai students who triumphed in the tests enjoy the very best China’s uneven schools can offer. Their experience has little in common with those of their peers in rural schools, or the makeshift migrant schools of the big cities, not to mention the armies of teenagers who abandon secondary school in favor of the factory floor.”
Another item to consider comes courtesy of Valerie Strauss, a Washington Post education writer, in December. She quotes at length the late Virginia Department of Education research director Gerald Bracey on the misleading nature of the international test scores.
To wit, Bracey’s research showed that even though Americans ranked 24th out of 30 participating nations in math and 17th of 30 in science, the U.S. had “25 percent of all the high-scoring students in the world.”
Strauss pointed out, “Well-resourced schools serving wealthy neighborhoods are showing excellent results.”
Bracey wrote, “Comparing nations on average scores is a pretty silly idea. It’s like ranking runners based on average shoe size or evaluating the high school football team on the basis of how fast the average senior can run the 40-yard dash.”
On a personal note, I have some reservations about spending in my kids’ school system. If the UD-3 board could so easily lop off enough money to meet the Challenges for Change law, why wasn’t it done earlier?
On the other hand, I am a product of a private school education. When I think back on the teachers I had and compare them to those who have taught and are teaching my daughters at Mary Hogan and Middlebury Union middle and high schools, there really is no comparison.
Theirs have been much better, AYP or not.
Andy Kirkaldy may be reached at [email protected].
