The race to fix America’s broken system of standardized exams.
It is a given that the new assessments will be administered on computers. This assumes two things: that students are comfortable working digitally, and that school districts have the necessary technological capacity. The first is probably a safe assumption; the second less so. Ask any state assessment director what he worries about most, and the answer is almost always some variation on “bandwidth.” In an informal survey taken by the common core R&D teams, more than half the states are already reporting significant concerns about capacity, including the number of computers available, their configurations, and their power and speed. This poses a dilemma: requiring too much technology may present insurmountable challenges for states, while requiring too little may limit innovation. Right now, the test makers are forced to essentially guess what the state of technology will be in 2014. An assessment director in Virginia, a state that already uses computer testing—but has not signed on to the common core—told attendees at a recent conference that when a rural school in his state charged all of its laptops one night, it overloaded the building’s circuits and shut off the facility’s heat.
Technological capacity can also narrow or enlarge what educators call the “testing window”—the amount of time they need to schedule for administering exams. The new tests will already require more time than existing assessments, but if districts don’t have enough computers for everybody to take the tests in the same week, they will have to enlarge the window even more, spreading testing over many weeks. In that case, students at the back end will enjoy an advantage because they will have had more time to learn the material being tested.
While the new assessments will undoubtedly be harder to score than the current fill-in-the-bubble ones, that doesn’t necessarily mean that the essays will be scored by humans. People, as you may have heard from your robot friends, need to be recruited and trained; they are subjective; and, worst of all, they are slow. PARCC, for one, says it will bypass these fallible creatures as often as possible: it wants items scored very quickly by computers to maximize the opportunity for the results to be put to good instructional use.
Because of recent advances in artificial intelligence, according to a 2010 report by the ETS, Pearson, and the College Board, machines can score writing as reliably as real people. That is, studies have found high levels of agreement with actual humans when those humans are in agreement with each other. (Given how often humans disagree, even the ETS concedes this is at best a qualified accomplishment.) Machines can score aspects of grammar, usage, spelling, and the like, meaning that they are decent judges of what academics call the rules of “text production.” Some programs, according to the ETS, can even evaluate semantics and aspects of organization and flow. But machines are still lousy at assessing some pretty big stuff: the logic of an argument, for instance, and the extent to which concepts are accurately or reasonably described.
By way of making assurances, the ETS says that machines can identify “unique” and “more creative” writing and then refer those essays to humans. Still, the new tests will be assessing writing in the context of science, history, and other substantive subjects, so machines must somehow figure out how to score them for both writing and content. Likewise, machines struggle to score items that call for short constructed responses—for instance, an item that asks the student to identify the contrasting goals of the antagonist and the protagonist in a reading passage. A machine can handle this challenge, but only when the answer is fairly circumscribed. The more ways a concept can be described, the harder it is for the machine to judge whether the answer is right. (For now, both consortia are calling for computer scoring to the greatest extent possible, with a sampling of responses scored by humans for quality control.)
The risk of all this, of course, is that in pursuit of a cheaper, more efficient means of scoring, the test makers will assign essays that are inherently easier to score, thus undermining one of the common core’s central goals, which is to encourage the sort of synthesizing, analyzing, and conceptualizing that only the human brain can assess. Flawed and inconsistent though they may be, humans can at least render an accurate judgment on a piece of writing that rises above the rules of “text production.” Maybe this is why all those high-achieving countries that use essay-type tests to measure higher-order skills use real people to score those tests. “Machine-scored tests are cheap, constitute a very efficient and accurate way to measure the acquisition of most basic skills, and can produce almost instant results,” says Marc Tucker. “But they have a way to go before they will give either e. e. cummings or James Joyce a good grade.”
There might be one other non-robotic way to bring down the cost of scoring: assign the task to local teachers instead of test-company employees. According to the Stanford Center for Opportunity Policy in Education, the very act of scoring a high-quality assessment provides teachers with rich opportunities for learning about their students’ abilities and about how to adjust instruction. So teachers could score assessments as part of their professional development—in which case their services would come “free.” Teachers, however, might find fault with this accounting method.
There’s no doubt that the joint common core effort provides opportunities for significant economies of scale: individual states can now have far better assessments than any one of them could afford to create on its own. But the fact remains that quality costs. The federal stimulus funding covers the creation of the initial assessments, but the overall cost of administering the tests dwarfs the cost of creating them. In addition, the stimulus money runs out in 2014, which is only the first assessment year. The Pioneer Institute, a right-leaning Boston-based think tank that has been critical of the common core standards, has put the total cost of assessment over the next seven years at $7 billion.
Whether that number proves accurate or not, it’s clear that the new testing regime represents a huge investment that most states haven’t yet figured out how to pay for. The current average cost per student of a standardized state test is about $19.93, with densely populated states paying far less and sparsely populated states paying far more. The SBAC estimates a per-student cost of $19.81 for the new summative tests and $7.50 for its optional benchmark assessments. But the Pioneer Institute says in a recent report that those numbers are unrealistically low given the consortium’s ambitious goals. PARCC, which has scaled back its original plans, projects combined costs for the two summative tests of $22 per pupil.