Eliminating Cultural Biases in Pursuit of High-Quality Education
August 22, 2018
How can we ensure that students have high-quality learning experiences in afterschool programs? We check the data, of course. To make this possible, the National Institute on Out-of-School Time co-developed the Assessment of Program Practices Tool (APT), a self-assessment observation tool that afterschool programs can use to see how they stack up on organization, learning, skill building, and other key practices tied to positive youth outcomes. But developing the tool was just the first step.
Since 2010, researchers from the Wellesley Centers for Women have continuously studied and refined the effectiveness and accuracy of the APT. A 2013 study that introduced video clips to improve rater accuracy identified a gap between the scores entered by Black and White participants using the tool.
In a recent follow-up study, Linda Charmaraman, Ph.D., research scientist at the Wellesley Centers for Women, and research associates Ineke Ceder and Amanda Richer, M.A., with funding from the William T. Grant Foundation, set out to examine the reasons behind that scoring gap and eliminate cultural biases in the assessment tool. This is especially important due to the “high stakes” ways that the tool has been used in recent years.
“The APT tool has been increasingly used by external reviewers to evaluate an educational program’s quality, and the outcome of those reviews could influence funding or have other financial implications for that program,” said Dr. Charmaraman. “By reducing cultural biases within the tool, we can ensure that the manner in which raters are assessed for accuracy is both fair and reliable, which will allow the APT tool to accurately reflect a program’s quality.”
Charmaraman, Ceder, and Richer asked a diverse group of experienced APT users to review the video clips from the previous study phase and removed those that did not achieve consensus scores. A new, diverse group of afterschool professionals then rated the remaining video clips with the APT. The results showed that the gap between the scores entered by Black and White raters had disappeared. In fact, there were no significant scoring differences between raters by race, gender, age, region, or experience with different ages and sizes of afterschool programs.
Richer presented these findings at the April 2018 Annual Meeting of the American Educational Research Association (AERA) in New York, NY. She shared the steps that were taken to reduce cultural bias in the APT assessment tool and offered afterschool practitioners practical guidelines on becoming more aware of cultural biases when rating programs for quality.
One way for practitioners to examine their implicit cultural biases concerns urban versus suburban programs. “Ask yourself,” said Dr. Charmaraman, “Am I consistently giving higher ratings to programs that are small, organized, and well-resourced? Am I failing to recognize programs with low budgets that have to be more creative and resourceful in their activities?”