At a glance
The National Asthma Control Program, in partnership with the U.S. Environmental Protection Agency, has created a five-part webinar series on program evaluation basics.
Series contents
Nationally recognized experts present a general introduction to program evaluation; note challenges in conducting useful evaluations as well as methods for overcoming those challenges; and introduce the six steps of the CDC Framework for Program Evaluation, using examples that are relevant to state partners of the National Asthma Control Program.
The webinars and tutorials, in viewing order:
Webinar 1
Top Roadblocks on the Path to Good Evaluation – And How to Avoid Them
Presented by Tom Chapel, MA, MBA, CDC Chief Evaluation Officer (Acting)
Tom Chapel, a nationally recognized evaluation expert, introduces CDC’s approach to program evaluation. After making the case for a utilization-focused evaluation framework, Tom presents some typical challenges programs encounter when trying to do good program evaluation. He then offers practical solutions to surmount these challenges, resulting in evaluation designs that are likely to yield information that can guide program improvement. The webinar concludes with a reminder to keep program stakeholders—intended users of the evaluation findings—at the center of any evaluation.
Duration: 25 minutes
Tutorial 1A
Focus On: Walking Through the Steps and Standards
Program improvement is at the heart of CDC's Framework for Program Evaluation. In this tutorial, Tom Chapel describes each of the six steps of the Framework and the four evaluation standards. He then explains how proper implementation of the steps and standards can help you understand your program and make evaluation design choices that will lead to greater use of your findings.
Duration: 10 minutes
Webinar 2
Getting Started and Engaging Your Stakeholders
Presented by Leslie Fierro, MPH, independent evaluation consultant to the National Asthma Control Program (NACP) and Carlyn Orians, MA, Battelle Centers for Public Health Research and Evaluation
Leslie Fierro and Carlyn Orians describe the initial steps of designing and implementing a program evaluation plan. They discuss the who, why, and how of engaging stakeholders in the program evaluation process (Step 1 in the CDC Framework) and present examples drawn from asthma program evaluations at the state and local levels. Leslie also explains the differences between research and evaluation.
Duration: 50 minutes
Webinar 3
Describing Your Program and Choosing an Evaluation Focus
Presented by Tom Chapel, MA, MBA, CDC Chief Evaluation Officer (Acting)
Tom Chapel describes the importance of a clear program description in program evaluation and explores the concept and uses of logic models in "describing the program" (Step 2 in the CDC Framework). He then moves to focusing the evaluation design (Step 3), drawing on examples from a previous webinar to illustrate the process of establishing priorities for an evaluation. Tom acknowledges the reality that few programs have the resources to evaluate every aspect of the program and so must prioritize the most salient evaluation questions for key stakeholders.
Duration: 60 minutes
Tutorial 3A
Focus On: Thinking About Design
Many people assume that randomized controlled trial experimental designs are the best evaluation design (i.e., the "gold standard"). In reality, as with every other step in the Framework, the right design for a given situation depends on the purpose of the evaluation and the available resources. In this tutorial, Tom Chapel describes the various evaluation design options and the strengths and weaknesses of experimental and non-experimental design models. He emphasizes that thinking through the evaluation standards can help you choose the best evaluation design for this intervention at this time.
Duration: 15 minutes
Webinar 4
Gathering Data, Developing Conclusions, and Putting Your Findings to Use
Presented by Christina Christie, PhD, Claremont Graduate University
Christina Christie covers Steps 4, 5, and 6 in the CDC Framework (gathering evidence, justifying conclusions, and ensuring use). She describes the processes of gathering and using data for program benchmarking, improvement, and accountability. Drawing on engaging field examples, Christina discusses key concepts and approaches for using data as well as the importance of a comprehensive communications strategy to disseminate findings.
Duration: 57 minutes
Tutorial 4A
Focus On: Data Collection Choices
Many data collection methods are available to you. Which ones are most likely to lead to the use of your findings? In "Focus On: Data Collection Choices," Tom Chapel discusses how to convert evaluation questions into measurable indicators and how those indicators help inform your data collection choices. Then, using a series of real-world examples, he illustrates how the purpose of the evaluation, combined with consideration of the evaluation standards, can help you make the best data collection choices for each evaluation.
Duration: 22 minutes
Tutorial 4B
Focus On: Using Mixed Methods
In some instances, using a single method of inquiry to answer your evaluation questions may result in incomplete or incorrect findings. The "mixed methods" approach, which combines at least one qualitative and one quantitative data collection method, addresses this concern. In this tutorial, Tom Chapel provides the rationale for such an approach and describes some of the choices and challenges evaluators face when using this now well-accepted evaluation methodology. The tutorial includes several simple examples as well as a discussion of the importance of looking to the evaluation standards for guidance when choosing among data collection options.
Duration: 22 minutes
Webinar 5
Evaluation Purpose Informs Evaluation Design
Presented by Tom Chapel, MA, MBA, CDC Chief Evaluation Officer
Tom Chapel demonstrates how defining an evaluation's purpose, user, and use helps frame and guide an evaluator's choices throughout the evaluation. Using the CDC Framework for Evaluation, he shows how an evaluation designed to support program replication varies from one intended to guide program improvement, which, in turn, varies considerably from an evaluation designed to support decisions about a program's funding. The webinar compares and contrasts three scenarios, showing a wide range of the choices faced in any evaluation.
Duration: 50 minutes