A recent Brookings Institution report seeks to measure the impact of advocacy groups on education reform policy. A review of that report published today finds that while it uses research methods that might be useful, the report’s shortcomings dictate that neither those methods nor the report’s conclusions should be accepted without additional research.
Robin Rogers and Sara Goldrick-Rab reviewed the report, Measuring and Understanding Education Advocacy, for the Think Twice think tank review project. The review is published by the National Education Policy Center, housed at the University of Colorado Boulder School of Education.
Professor Rogers teaches Sociology at Queens College and the City University of New York Graduate Center. Her research focuses on politics, policy, and philanthropy. Professor Goldrick-Rab teaches Educational Policy Studies and Sociology at the University of Wisconsin-Madison, and is founding director of the Wisconsin HOPE Lab, the nation’s only translational research laboratory focused on college affordability.
Measuring and Understanding Education Advocacy, written for Brookings by Grover J. (Russ) Whitehurst and others, offers two new tools for measuring the impact of advocacy groups on education reform policy: Surveys with Placebo (SwP), designed to measure more accurately the influence of advocacy groups, and Critical Path Analysis (CPA), designed to identify which tactics were successful in influencing reform. The report contends that coordination among advocacy groups strengthens their impact and that the perceived impact of advocacy groups tracks closely with policy outcomes.
Rogers and Goldrick-Rab conclude that the two methodologies might have potential for education policy research. They point out, however, that both “are more limited than the report acknowledges.”
The reviewers explain that “the research is a small case study of three states, with a low response rate for the SwP and CPA based on advocacy groups’ self-reported tactics.”
The report, they add, lacks sufficient information about responses to the Surveys with Placebo or about the selection of the advocacy groups. This undermines the ability of the report’s readers to adequately assess either the usefulness of its methods or the validity of its conclusions.
“Finally, there is not a strong connection between the evidence presented in the report and its conclusions,” Rogers and Goldrick-Rab conclude. “We therefore caution against adoption of the methods or reliance on the conclusions presented in this report without significant further research.”