The Unique Challenges of Afterschool Research

A Practical Guide for Evaluators and Practitioners
Lizzie Murchison, Katie Brohawn, Cheri Fancsali, Andrea D. Beesley, and Erin Stafford

Funders and policymakers are increasingly recognizing the afterschool field for its vital role in supporting the social and emotional growth and academic achievement of school-age youth. Although this recognition is welcome, it often comes with increased expectations for high-quality research demonstrating the value of programming. To satisfy these demands and make the most of funding opportunities, practitioners must develop strong partnerships with external evaluators. However, developing afterschool evaluation partnerships that work well for all parties is often far more difficult than program directors or evaluators anticipate.

When research is conducted in K–12 schools, educators often bring some experience in assessment methods, and researchers often have at least a basic knowledge of pedagogy. In contrast, in the out-of-school time (OST) field, program directors with little formal research experience are frequently paired with evaluators who lack experience in OST programs. This research-practice gap, if not addressed, can translate into frustrating evaluation experiences for practitioners and evaluators alike. Program directors may finish an evaluation feeling that they did not learn anything new or that the study was entirely for the benefit of the funder. Evaluators may find themselves stymied by data collection issues and communication challenges they are unprepared to solve.

The literature offers little practical guidance about developing and conducting research in OST settings, beyond instruments for possible use in evaluation. This article addresses this gap by providing candid advice for evaluators seeking to transition from K–12 to afterschool research. This advice may also help program directors and other stakeholders who want to make the research process work more effectively for them. We aim to help evaluators understand what is and is not possible (or advisable) in afterschool evaluations and to help practitioners serve as more effective partners by anticipating evaluator assumptions and other challenges that can derail a study.

As authors, we bring a variety of experience in researching and evaluating OST programs. We have conducted mixed-method evaluation studies for general programmatic improvement as well as rigorous randomized control trials for federal agencies, including the National Science Foundation and the U.S. Department of Education. Some of us have studied community-based afterschool programs generally, while others have concentrated on specific initiatives in STEM, literacy, and social and emotional learning. Many of the afterschool programs we have researched have taken place in schools, though a few have been located in spaces such as community centers, museums, libraries, and maker labs. This article addresses a broad spectrum of research designs, from formative assessments to confirmatory analyses, in varied OST settings.

In our experience, regardless of the intended audience for the report or the level of rigor in the study design, evaluators transitioning to afterschool are challenged by a common set of issues related to data collection and communication. This article addresses those challenges. First, we describe how afterschool is unique—and particularly how it is different from K–12 education. Next, we recommend ways to take those unique features into account when designing and implementing an afterschool study. The final section addresses best practices for forming and maintaining strong partnerships between evaluators and practitioners to produce results that meet the needs not only of funders but also of the program and its staff, students, and families.

The Unique Context of Afterschool Programs

Evaluators with experience implementing K–12 evaluations often approach afterschool programs with expectations and recommendations framed by that experience. However, there are a number of contextual factors unique to afterschool that should alter this calculus. Assumptions from K–12 experience about staff capacity, data collection procedures, and funding stability may not apply to afterschool programs. Imposing those expectations can result in significant implementation challenges and can ultimately limit the conclusions that can be drawn about the efficacy or impact of the program. To avoid these challenges, evaluators must adjust their expectations to fit the unique context of afterschool.

Expectations About Staff Participation
Afterschool programs typically run for one to four hours each afternoon. Staff often take these positions as second jobs or as part-time work paired with educational pursuits. Most staff are hourly employees; they are paid for direct service to students and may not have paid time for evaluation activities such as completing surveys or participating in interviews. Without a firm directive from the program director on how and when staff are to complete data tasks, limited staff capacity can become a real barrier to evaluation planning and implementation.

Another challenge is that few programs assign organization email addresses to line staff. Younger workers, who make up the bulk of frontline staff, often prefer to communicate with their supervisors via text message. In these circumstances, evaluators may have a hard time locating valid email addresses to which staff will respond outside of program time.

Expectations About Data Collection
In a school, an evaluator can enter a homeroom class to administer a survey and expect that the large majority of students will be present to complete it. By contrast, finding appropriate times to collect data in afterschool programs can be a challenge. Afterschool programs are usually voluntary, and attendance rates are lower than in school. Furthermore, students may be present for part of the afterschool session but arrive late due to school obligations or be picked up early due to conflicting family schedules. This uneven attendance can make it difficult for evaluators to achieve high response rates or match pre- and post-participation respondents.

Collection of existing administrative data can be equally challenging. In K–12 research, accountability mandates in most districts mean that data on metrics like school attendance and enrollment are typically quite clean and comprehensive. However, the data may not be available to afterschool researchers; securing data sharing agreements can take time, resources, and consents that researchers may not be able to gather in the period allotted. Meanwhile, although many afterschool programs have enrollment and attendance records, they are often not as systematic as school or district data. For example, attendance data might be collected in paper records that must be entered into a database. Issues of data availability and quality, such as missing records or inconsistent data collection, can limit evaluators’ ability to use afterschool program records. Even when the data are clean, they are not guaranteed to be readily accessible. For example, in New York City, state test scores are housed centrally, but there is a four- to six-month lag between when individual schools and families receive results and when researchers can gain access to the scores.

Expectations About Stability
In both school and afterschool, the time between applying for funding and receiving it can be long. However, in K–12 education, evaluators can be confident that, even after such a time lag, the school will still be running, and most of the staff will still be there. Funding for afterschool is far less stable. Loss of a single critical funder can force programs to suspend operations on short notice, making retention of partner sites difficult. Funding instability also means that staffing is not always solidified at the beginning of the school year. Group leaders are often hired shortly before each semester, once enrollment numbers are known. Programs thus may not be able to commit staff to participate in a study months or even weeks in advance.

Even among well-funded afterschool programs, the turnover rates of both staff and students are substantially higher than in schools. Afterschool programs traditionally employ many staff who view their afterschool job as a stepping-stone in their career, as opposed to a career in and of itself. Afterschool employees who are concurrently working toward a college degree often change their availability from semester to semester. Student attrition rates are also often high—and they increase substantially as students move from elementary to middle to high school (Lauver, Little, & Weiss, 2004), when students gain independence and have more options for their afterschool hours. High levels of student attrition pose limitations to multi-year study designs, as evaluators cannot assume that most of their sample population will remain enrolled over time.

The Nuts and Bolts of Designing and Implementing a Great Study
The unique challenges of the afterschool space require investigators to take a flexible and hands-on approach to evaluation. Too often evaluators assume they can cajole afterschool programs into operating with the same level of planning and structure as schools, only to be disappointed by the results. A more successful strategy is to accept and plan for complications like funding instability, student and staff attrition, and incomplete data. By anticipating these obstacles, evaluators are much more likely to successfully mitigate challenges and protect the validity of their findings.

Determining Study Duration and Sample Size
A good first step when developing a practical study design is to determine whether multi-year data collection is necessary. Although most afterschool providers do target long-term developmental outcomes, most afterschool evaluations are not set up to track student progress over multiple years. This discrepancy is due, in part, to the challenges of managing high year-to-year attrition and inconsistent attendance. For example, afterschool providers may theorize that the impact of their program is strongest when students have been enrolled for at least three years, but that theory could prove impossible to test if a large and steady cohort of returning students cannot be identified.

To determine the best duration and sample size for an afterschool evaluation, researchers should look to existing data and make careful estimates of expected attendance and attrition patterns. The fact that student attrition increases substantially as students get older must be taken into account when considering expected year-to-year participant retention rates and acceptable thresholds for sample sizes. For example, a study design that assumes 20 percent year-to-year attrition may be suitable for an elementary program but unrealistic for a middle school program. Similarly, evaluators have to anticipate some attrition at the site level, as noted above. Given the uncertainty caused by student attrition and funding instability, program impacts often are best captured by study designs that span a single academic term or year, rather than multiple years.
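
As an illustration of this kind of estimate, the short Python sketch below works through the underlying arithmetic. The attrition rates and target sample size are hypothetical placeholders rather than figures from any particular program; in practice, they should come from the program's own historical enrollment and attendance records.

```python
# A minimal sketch of enrollment planning under attrition. All figures below
# are hypothetical placeholders; substitute estimates drawn from the program's
# own historical enrollment and attendance records.

def required_initial_enrollment(target_final_n: int, yearly_attrition: float, years: int) -> int:
    """Starting sample needed to end a study with target_final_n students,
    assuming a constant year-to-year attrition rate."""
    retention = (1 - yearly_attrition) ** years
    return int(round(target_final_n / retention))

# Example: an analysis that needs 100 students at the end of year two
print(required_initial_enrollment(100, 0.20, 2))  # ~156 needed with 20% yearly attrition
print(required_initial_enrollment(100, 0.45, 2))  # ~331 needed with 45% yearly attrition
```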

Beyond attrition, afterschool attendance can also vary considerably. Some programs have high enrollment numbers but extremely inconsistent dosage among participants—a fact that some providers may not know to flag in the early planning stages. If a site is meeting dosage requirements for the student population as a whole but individual student attendance is spotty, a longitudinal approach with three or more data points over the course of a year may be useful. For all types of evaluation, this design provides a fairly comprehensive picture of what’s happening on the ground. In particular, evaluators undertaking a rigorous evaluation can use this approach to employ growth curve modeling, which is flexible enough to capture students who miss one or more data points.
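
For evaluators who do adopt such a longitudinal design, the sketch below shows one way a growth curve model might be fit, assuming survey results have been assembled in long format (one row per student per data point). The file and column names ("survey_waves.csv", "student_id", "wave", "score") are illustrative assumptions, not details from this article; the key point is that students who miss a wave contribute fewer rows rather than being dropped from the analysis.

```python
# A minimal sketch of a growth curve (random intercept and slope) model using
# statsmodels, assuming a hypothetical long-format file with one row per
# student per survey wave.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("survey_waves.csv")  # hypothetical columns: student_id, wave, score

model = smf.mixedlm(
    "score ~ wave",              # average growth trajectory across waves
    data,
    groups=data["student_id"],   # random intercept for each student
    re_formula="~wave",          # random slope: each student's own growth rate
)
result = model.fit()
print(result.summary())

# Students who miss a wave simply contribute fewer rows; they are not excluded
# the way they would be in a complete-case pre-post comparison.
```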

Selecting Evaluation Tools That Minimize the Burden on Programs
Just as evaluators must familiarize themselves with afterschool attendance patterns to determine sample size and study duration, so too must they consider individual program capacity when selecting assessment tools. Many afterschool practitioners will naturally expect an evaluation to use a pre-post survey or quiz of some sort. Researchers should be prepared to discuss a variety of methods and data collection options with staff, including retrospective surveys, activity observations, focus groups, interviews, fidelity rubrics, collection of secondary data such as school grades or state test scores, and assessments that do not rely on student self-report. Many of these approaches can be implemented without interrupting or taking time from programming, a common concern among program directors.

If the evaluation does require students to complete a survey or other written assessment, evaluators should consider the length of the instruments and the frequency of administration. With limited time in each afterschool day to accomplish their goals, practitioners may (rightly) balk at any written assessments that take more than 20 minutes. Tools that require more time should be selected only if administration can be broken up into multiple days, and then only if attendance in the program is fairly regular.

Once the methodology has been agreed upon, evaluators must consider whether an existing tool can be utilized or a new one must be created. Because afterschool programs are often designed around unique or “outside-the-box” solutions to youth development challenges, practitioners may assume that no existing tool could adequately capture the innovative work they are doing. However, evaluators should surface and evaluate existing tools, as they may expand the opportunities to find high-quality comparison data. With regard to format, it may be necessary to offer programs the option of completing assessments with paper and pencil, as many providers have limited access to computers and reliable internet connections.

Developing an Effective Data Collection Plan
Another critical component is an effective data collection plan. A solid plan is particularly important when the design includes student or staff surveys, which tend to require considerable logistical coordination on the part of evaluators, site managers, staff, and students. Afterschool programs often manage gaps in staffing, facilities, and resources with little notice. Activity schedules can shift at the last minute in response to changes in classroom availability, access to computers or other school equipment, or the need for available staff members to cover different classrooms to meet staffing ratio requirements. If the evaluation permits, having external evaluators on site to oversee survey administration can help ensure that the correct students are being assessed and that the directions and environment are consistent.

When evaluators can’t administer surveys themselves, designating a point person for data collection at each site can be useful. To ensure consistency of administration and collection methods across sites, evaluators can train the designated point people in a webinar that covers each component of the data collection process. Evaluators can review consent forms and answer questions, provide clear instructions on survey administration, demonstrate how to enter data into electronic forms or spreadsheets, and review the administration timeline. They should be explicit about exactly who is expected to complete the survey and about the minimum number of surveys needed for a representative sample. When reviewing administration protocols, evaluators should emphasize that participation in assessments or surveys is voluntary, provided this is true. They should coach program staff on how to respond to students who do not wish to participate so that staff do not inadvertently coerce participation. Providing a script for staff to read before survey administration can help mitigate common issues. Evaluators can also offer tips for selecting the best time and place for administration—at a time when students can focus (and therefore not just before snack or pick-up time) and in a space where they can read and write comfortably.

When evaluators need to be physically present for qualitative data collection, such as program observations or interviews, one prudent step is to send reminder emails. Having a Plan B ready when schedules change at the last minute is also helpful. For example, evaluators might identify early on several potential visit dates or arrange for staff members to videoconference into interviews. Staying mindful of the time program directors need to coordinate multiple evaluation tasks, evaluators should minimize the number of separate requests they make.

Defining (Realistic) Timelines
After assessment tools have been identified but before the evaluation plan has been finalized, evaluators should find out whether the afterschool program falls under the jurisdiction of any school district or other institutional review board (IRB). Though many afterschool programs are not subject to such regulations, some are. Evaluators may also have their own organization’s IRB process to contend with. A single evaluation thus may need to comply with two or more overlapping IRB processes, which will govern what types of parent permissions or consent are required. The need for IRB approval can significantly affect a study’s timeline. Evaluators should, if possible, begin the application process several months before school partners begin compiling their afterschool enrollment packets, typically in August, so that consent forms or other required paperwork for parents and guardians can be included.

Another factor that affects the schedule is the time it takes to request and receive access to existing student records. Some school principals are extremely reluctant to share student records, even with parental consent and even when the data are being used entirely for internal programmatic improvement. Factoring such negotiations into the evaluation timeline is key to successful data collection.

Communicating With Parents and Participants
After evaluators have secured buy-in from program leaders and school or district officials, they will need a solid plan for communication with parents and students to ensure a strong launch. Keep in mind that, when today’s parents were in elementary school, afterschool providers typically had much more limited activities and responsibilities; they opened the gym, provided enriching activities, and kept a fresh supply of Band-Aids handy, but no one was holding them accountable for students’ academic gains. Few parents are aware that funders require afterschool programs to demonstrate quantitative impact, and many are protective of their children’s personal data. They may be wary when afterschool providers ask for consent to gather data or to use existing records. Evaluators should take pains to explain to both parents and students exactly what the programming involves, how its impact will be assessed, and how the results will be used. All written communications for parents should be translated into languages and reading levels that are accessible to all. When this is not possible, competent staff should be trained to communicate the information orally. Creating explicit connections between the evaluation and the quality of the program is a first step toward building trust for a successful evaluation.

Research-Practice Partnerships
Clear communication not only with parents but also with program leaders and staff is key to the success of afterschool evaluations. In any research or evaluation, the researchers and the programs they study must be in sync, in terms of both goals and logistics. However, strong alignment can be difficult to achieve in afterschool research when the requirements of a rigorous, tightly controlled study design are at odds with a program implementer’s priorities. For example, a randomized control trial design requires that students be randomly assigned to the program or a control condition. This structure can be challenging for program implementers who are accustomed to serving as many students as their space and budget allow. Many site directors are used to having the flexibility to adjust programs to respond to individual student needs. However, that degree of responsiveness is not always possible in a rigorous study, where specific inputs are defined in the logic model. In addition to these challenges, afterschool leaders may worry that negative evaluation findings will affect funding or that data collection will steal precious time and resources from direct service.

Close partnerships between evaluators and afterschool stakeholders can mitigate these issues and increase the quality and usefulness of the research. The partners should address early on any disconnects between their goals. A recent flurry of activity in social policy research on research-practice partnerships (Tseng, Easton, & Supplee, 2017) reflects our own experience as evaluators. Both the theory and our practice show that the input of practitioners keeps the research grounded in reality, increases its relevance and usefulness, and ultimately enhances its ability to improve outcomes (Coburn, Penuel, & Geil, 2013). Below we outline several strategies that are helpful in developing strong partnerships between afterschool practitioners and evaluators.

Leveraging Existing Afterschool Networks
As evaluators begin to establish relationships in the field, they should scan the local area for afterschool networks. Though afterschool programs do not have the built-in infrastructure and support of local and state education agencies, many states and cities do have afterschool networks that support and connect programs. These networks can serve as community liaisons for researchers by helping them, for example, to make initial contact with potential research sites and then gain buy-in from stakeholders. They may assist evaluators in collecting administrative data from state and local education agencies or provide technical assistance to help programs implement a particular intervention. Furthermore, networks can help evaluators understand the local context so they can reflect that context when communicating with program staff and participants. Once the relationship between an evaluator and a community organization has been established, the role of a network in an evaluation partnership can vary. Representatives of the network may serve on a voluntary advisory board, or the network can be a full-fledged partner with responsibilities such as data collection, financial support, program delivery, or communication with sites.

Including Practitioners From the Beginning
After establishing initial relationships, partner organizations turn to collaboratively articulating the program’s activities and goals and designing the evaluation. Given the constraints on their time and resources, many afterschool leaders need help to understand why they must build in time at the front end to help researchers plan the evaluation. They need to know that this early investment in the work is crucial to executing an evaluation whose results they can use to assess success and guide decision-making.

Evaluators and program leaders should work together to document the program’s theory of change—what the program is trying to change and how—and its theory of action—the steps the provider takes to implement the theory of change. Having a well-articulated theory of change and theory of action helps stakeholders to achieve a common understanding of the program’s goals, to surface assumptions about the program and its participants, and to highlight any contextual concerns that need to be addressed for the program to be successful. It also helps with the next step, which is to identify and agree on appropriate and realistic outcomes and indicators of program success.

Many larger afterschool organizations are inclined to limit strategic discussions about research and evaluation to the director level. We recommend also including afterschool site coordinators. They can speak both to the mechanisms that drive a program and to the realities of practice. They see firsthand how programming operates on the ground and can describe the reactions of—and outcomes for—participants. In addition, practitioners know what kinds of study results would be most beneficial. This information can guide the development of research questions, design, and methodology. Working with practitioners in the early stages of a project to define the goals and methods of the research generates staff buy-in, improves the quality of the study, and helps ensure that the results are relevant and useful.

Engaging Funders and Staff in Dialogue on Program Measures
Once a program’s theory of change and expected outcomes have been clearly articulated, the discussion naturally turns to the practicalities of assessment. Providers often find it challenging to translate theorized outcomes into measures that adequately capture the richness of what an afterschool program offers. Many programs target broad skill or mindset changes, such as workforce readiness or innovation and creativity, that may seem abstract or undefined and therefore difficult to measure through an evaluation. To ensure that both program staff and funders are comfortable with and support the measures selected, both groups must be included in identification of targets and measures from the beginning.

Evaluators must be prepared to deal with the perceived imbalance of power between practitioners and funders to ensure that program plans and evaluation designs meet the needs of both parties. Sometimes funders require outcomes that are beyond the influence of the afterschool program, for example, expecting afterschool academic or social and emotional supports to change school-day academic outcomes, often in a single year and without controlling for outside factors. On the other side, sometimes programs overstate their intended impact in a proposal to increase their chances of being funded. In either case, the program and its evaluation are not set up for success from the start.

Evaluators are well positioned to broker honest conversations between program staff and funders during program planning and evaluation design. They can proactively tackle crucial questions: What are realistic program outcomes given the duration of the intervention? What outside factors might influence these outcomes? What evaluation design best suits the needs of the program? Coming to a shared understanding early in the planning process of realistic outcomes and how to measure them can address the concerns among program staff that they might be held to unrealistic expectations or unfairly judged in ways that will affect their funding.

Defining Roles and Communicating Regularly

Another step evaluators can take to help prevent conflicts is developing a memorandum of understanding (MOU) that outlines each partner’s roles and responsibilities. In this document, researchers and practitioners make explicit their underlying assumptions and expectations before the work begins. MOUs should address such issues as who is responsible for collecting data, access to administrative records, procedures for obtaining consent for study participation, timelines for data collection and reporting, and access to staff and students to conduct surveys or program observations.

In addition, evaluators and program leaders should build in opportunities to discuss the project and emerging findings. Brief regular check-ins can confirm that the evaluation focus and instruments stay aligned with the program’s theory of change. They can also build trust between partners and enable practitioners to give and receive timely feedback on the data.

Focusing on Capacity Building

Foremost in all of these strategies is the idea that research-practice partnerships are mutually beneficial relationships. This assumption helps both parties make sure that the research is not something that is “done to” programs. For many afterschool programs, the opportunity to develop internal evaluation capacity can be a strong motivator. Collaborating with evaluators builds staff capacity to conduct research and use data to inform practice. For example, evaluators can help program staff develop templates and data collection instruments, set up data management systems, and create processes for analyzing and reflecting on the policy and practice implications of findings. Evaluators may also build in opportunities to review program data systems alongside program staff to see what data are being collected from which sources and whether any processes can be tweaked to gather the same or similar information more efficiently while maintaining data accuracy and integrity. These strategies, which are useful for research in any context, can be particularly helpful in the afterschool arena, where practitioners may have little experience with research and few resources to commit to data collection and analysis.
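
As one illustration of this kind of joint review, the sketch below shows a simple check an evaluator and program staff might run together on an attendance export before using it in analysis. The file and column names are hypothetical; any real review would be adapted to the program's own data system.

```python
# A minimal sketch of a joint data review, assuming a hypothetical attendance
# export with columns "student_id", "date", and "present".
import pandas as pd

attendance = pd.read_csv("attendance_export.csv", parse_dates=["date"])

# Duplicate student-day records can indicate double entry from paper rosters.
duplicates = attendance[attendance.duplicated(["student_id", "date"], keep=False)]

# Blank attendance marks point to days when sign-in sheets were never entered.
unmarked = attendance["present"].isna().sum()

# Very low counts of recorded days can flag students who are enrolled but
# rarely tracked, which matters for dosage analyses.
days_per_student = attendance.groupby("student_id")["date"].nunique()

print(f"Duplicate student-day records: {len(duplicates)}")
print(f"Records with no attendance mark: {unmarked}")
print("Students with fewer than 10 recorded days:")
print(days_per_student[days_per_student < 10])
```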

Bridging the Gap
Evaluators who study school-day initiatives can look to a robust body of literature to determine best practices for study designs, sample sizes, limitations, and so on. When conducting studies of afterschool programs, evaluators may expect to use the same metrics and strategies they would use for K–12 programs. However, the differences between school and afterschool settings require evaluators to shift their assumptions. Designing afterschool studies using school-day approaches can prove—and has proven—disastrous, despite good intentions. Although school and afterschool programs often have the same goal—to improve outcomes for the youth they serve— the mechanisms by which they achieve this goal and the contexts in which they operate are quite different. Therefore the evaluation approaches must also differ.

To continue to be seen as worthy of investment, the afterschool field needs to develop strong data-driven evidence documenting improved youth outcomes and illuminating the specific strategies that are most effective. Strong research-practice partnerships are necessary for evaluators to understand what makes this educational space unique. Only by approaching afterschool evaluations with an explicit focus on collaboration and context can evaluators hope to bridge the gap between research and practice.

Lizzie Murchison, MA, is the senior research associate for ExpandED Schools, where she manages a portfolio of afterschool evaluation projects focused on elementary literacy, middle school STEM, and preservice teacher training.

Katie Brohawn, PhD, as vice president of research at ExpandED Schools, helps to establish research priorities to inform policy and support data-driven continuous improvement, especially in expanded learning time.

Cheri Fancsali, PhD, is the research director at the Research Alliance for New York City Schools. She has led numerous studies of school-based and afterschool programs, particularly focusing on STEM education and teacher professional development.

Andrea D. Beesley, PhD, is a principal education researcher at SRI International. She focuses on motivation and engagement in learning, evaluating STEM programs in and out of school, and working with rural populations.

Erin Stafford, MA, is a senior research associate at Education Development Center with expertise in out-of-school time. She works with government agencies and nonprofit organizations to research and evaluate questions of policy and practice.
