Education Decision Makers Need More Timely, Actionable Data

December 13, 2016

In the Stanford Social Innovation Review, CEPR Executive Director Jon Fullerton and Proving Ground Director Bi Vuong discuss the need to use short-term, intermediate measures of success to effectively manage interventions within school districts.

...

A second challenge facing education agencies is the lack of infrastructure for steady, careful measurement and evaluation. There are three problems here. First, evaluating whether something “worked” on an ad hoc basis typically takes days (or months) of data processing and complex statistical programming by analysts, which severely limits the number of programs and interventions that can be evaluated in a timely fashion. Second, most education agencies are not large enough to conduct rigorous evaluation on their own: the number of classrooms or schools is simply too small for analysts to determine whether changes in outcomes are the result of a particular program or just random noise in the data. As a result, it may not even be possible to tell whether certain programs are improving outcomes. Third, agencies often lack a network of peer agencies from which they can learn and consistently improve their practice. Because they don’t know how interventions are working in other agencies, they may either continue implementing a program that is not working or give up on a program that might work if it were tweaked.
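To see why scale matters, consider a rough power calculation. The sketch below uses Python’s statsmodels library with assumed figures (a modest 0.2 standard-deviation effect, a 5% significance level, and 80% power; none of these numbers come from the article) to show how many schools a rigorous comparison actually requires.

    # Illustrative only: effect size, alpha, and power are assumptions,
    # not figures from the article.
    from statsmodels.stats.power import TTestIndPower

    power = TTestIndPower()

    # Schools needed per group (treatment vs. comparison) to reliably
    # detect a modest 0.2 SD effect: roughly 394 per group, far more
    # than most individual districts can muster.
    n_needed = power.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
    print(f"Schools needed per group: {n_needed:.0f}")

    # Conversely, a district that can split only 40 schools into two
    # groups of 20 can reliably detect only very large effects (~0.9 SD).
    min_effect = power.solve_power(nobs1=20, alpha=0.05, power=0.8)
    print(f"Smallest detectable effect: {min_effect:.2f} SD")

Under these assumptions, a single small district is badly underpowered on its own, which is exactly the gap a multi-agency network can help close by pooling classrooms and schools across sites.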

Here again, there are signs of hope. New tools and networks are being developed specifically to help education agencies make faster progress. The Center for Education Policy Research’s Proving Ground project is organizing a network of 13 participating districts and charter management organizations to overcome the scale problem, allowing even relatively small systems to gain insight into what is working and what is not. In less than one year, members of the Proving Ground network have completed a full continuous improvement cycle: agencies have seen the impact of their selected interventions, strategized individually and collectively on how to improve implementation, and rolled out new strategies to help improve student progress. Twelve of the 13 sites have made management decisions that affect program implementation across the network.

...

Continue reading at the Stanford Social Innovation Review