Lessons from PARCC’s Sudden Rise and Rapid Fall: Budgets and Data
We discussed last week how and why PARCC, the standardized test born of the U.S. Department of Education's 2009 Race to the Top program, has gradually fallen out of favor with schools. (Specifically, we discussed how it, like many standardized tests, can disrupt instruction, subject students to long slogs of test prep, and reduce teachers to little more than proctors.)
But PARCC is not a lone offender, and its problems can inform how standardized tests are designed and implemented in the future. Below are a few more lessons learned from the PARCC predicament.
Testing should not overwhelm school budgets
Time and learning opportunities are not the only educational resources hit hard by the PARCC. School budgets have felt the strain of the technology-based assessment as well.
In many cases, school districts in PARCC consortium states have been forced to rapidly upgrade student devices and school-wide networking capabilities so that students could reliably access TestNav, the PARCC's online platform developed by Pearson. On the surface, this looked like a fringe benefit: implementing the PARCC built out the tools and infrastructure needed to deliver 21st century learning experiences for students. In reality, the forced technology upgrades have taken huge bites out of already tight budgets and pushed districts into tough, otherwise avoidable decisions.
The costs don’t stop with infrastructure. The PARCC assessment itself costs just under $30 per student. While that is neither the cheapest nor the most expensive standardized test offered by states, the PARCC carries a significant price tag for its materials, scoring, and reporting. Furthermore, in many states the PARCC is taken by a broader range of grade levels than the state tests it replaced, so even where the PARCC is the cheaper test, the total financial burden is often higher.
Unsurprisingly, nearly half of the states that have dropped out of the PARCC consortium have cited cost-related reasons.
Testing data must be available and useful
While much was initially made of PARCC’s rigor and its targeted approach to evaluating student growth, skills, and understanding, the resulting data has been both inconsistent and, at times, hard to come by.
The turnaround from the close of the testing window to when scores reach schools and families can take months and, in most cases, has spilled into the following school year. This is problematic for a number of reasons:
- Teachers lack the data to plan for incoming students in September
- Parents and students do not have access to information that could be used to make decisions for interventions, enrichment, or course selection prior to the next school year
- Annual teacher evaluations that include test results remain incomplete even after the subsequent school year has started
Then there is the matter of the test’s validity. While many states still in the PARCC consortium sit firmly atop the U.S. News and World Report rankings, their students are struggling to pass the PARCC: Maryland, New Jersey, and Illinois all reported less-than-stellar results in 2017. Many of PARCC’s issues are not unique to it; they run through the broader high-stakes testing debate. That said, the PARCC’s lauded promises of a better, fairer, more useful standardized test have quickly fallen away and now stand as an Ozymandias-esque warning to the tests that will surely follow.
If standardized tests are to remain part of the educational landscape, they must respect the time, budgets, and goals of the schools that are to implement them.
Read more:
PART ONE: Lessons from PARCC’s Sudden Rise and Rapid Fall