Accurately measuring a school’s contribution to student growth

ESSA Strategy Call

Accurately measuring a school’s contribution to student growth

Our first weekly ESSA Strategy Call focused on Gateway City priority 1: a formal accountability system that creates a level playing field for urban districts by isolating each school’s contribution to student learning when describing performance.

Accurately capturing school performance is largely about the model Massachusetts adopts to statistically control for demographic variation across schools. To help us consider the tradeoffs we face in selecting a model, we were joined by Andrew Rice, Vice President of Research and Operations at Education Analytics Inc. Andrew’s a national leader on methodologies to measure student growth.

On our call, he emphasized that isolating schools’ contributions to learning is critical for any system built primarily to differentiate schools and hold them accountable for their performance. If we don’t get it right, we risk, among other problems, increasing costly turnover of teachers and school leaders.

According to Andrew, the best way to tell how much learning a school is providing is with a value-added model that incorporates information on student characteristics, such as economic disadvantage, English language proficiency, and special education status. In his words, “The most scientifically accurate model is one that uses as much data as possible.”
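To make the idea concrete, here is a minimal sketch of one common value-added approach, using entirely made-up data. It is not DESE’s model or Education Analytics’ methodology; it simply illustrates the logic Andrew describes: regress current scores on prior scores plus student characteristics, then treat each school’s average residual as its estimated contribution.

```python
# Illustrative value-added sketch with fabricated data (not an official model).
# Step 1: regress current-year scores on prior scores and student covariates.
# Step 2: average each school's residuals to estimate its contribution.
import numpy as np

rng = np.random.default_rng(0)
n = 200
school = rng.integers(0, 4, n)        # which of 4 hypothetical schools
prior = rng.normal(50, 10, n)         # prior-year score
econ_dis = rng.integers(0, 2, n)      # economically disadvantaged flag
el = rng.integers(0, 2, n)            # English learner flag
sped = rng.integers(0, 2, n)          # special education flag

# Simulated scores: prior achievement, a school effect, and covariate effects
true_effect = np.array([2.0, 0.0, -1.0, 1.0])   # unknown in practice
score = (prior + true_effect[school]
         - 3 * econ_dis - 2 * el - 2 * sped
         + rng.normal(0, 3, n))

# Ordinary least squares on the covariates (intercept included)
X = np.column_stack([np.ones(n), prior, econ_dis, el, sped])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
residual = score - X @ beta

# School value-added = mean residual of its students
value_added = np.array([residual[school == s].mean() for s in range(4)])
print(value_added)
```

Because the regression strips out what prior achievement and demographics predict, the residual averages reflect performance relative to statistically similar students elsewhere, which is the sense in which a value-added model “levels the playing field” across schools.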

Massachusetts currently uses a Student Growth Percentile, or SGP, model that does not include these statistical controls. While Andrew believes our SGP model is an excellent method for determining how much student learning has occurred over the school year, it is not the best way to isolate a school’s impact on student learning.

He says, “SGP models are designed to estimate the growth of students rather than the impact of schools on kids. A good value-added model can start to measure the impact of the school on the kids, trying to get rid of the impact of the context that those kids are in. In very inclusive urban schools you want to be very careful that you’re not creating growth measures that pin on the schools the societal context that those kids are in.”

Why, then, do Massachusetts and many other states utilize the SGP model? Andrew believes it was a relatively easy first step for states to take toward capturing growth, and a vast improvement over evaluating schools based on proficiency scores alone.

To get accountability systems that provide a fair indication of school performance, we need to take another step forward and build value-added models. Moreover, we should apply a value-added approach to other measures, not just test scores. For instance, the graduation rate and chronic absenteeism measures the Department of Elementary and Secondary Education has proposed could also be adjusted using a value-added model.

Listen (above) to our conversation with Andrew. For a more in-depth look at growth models, we also have video of Andrew presenting to the Next Generation Accountability Learning Community last summer.

Meet The Authors

Ben Forman

Research Director, MassINC

Maureen McInerney

Public Affairs Associate, MassINC
