Selecting Evaluation Methods

Now that you've selected an educational or outreach program to evaluate and determined its objectives and anticipated outcomes, you must decide how to conduct the evaluation. What type of data do you wish to collect? What methods and techniques will you use to gather that information? The range of possibilities includes paper mail surveys, on-line surveys, and phone or face-to-face interviews. The following resources provide information on selecting evaluation methods.

Evaluation of Extension Programs
This how-to guide by Richard L. Poling, University of Arkansas, reviews impact data types and collection tools. 2005.

Measuring Impact of Educational Programs
This fact sheet by Keith Diem, Rutgers University, offers a step-by-step tutorial on planning and developing evaluations. 2002.

Evaluating Knowledge
This fact sheet by Nancy Ellen Kiernan, Penn State University, provides a step-by-step guide to evaluating knowledge gain. 2001.

Be Logical About Program Evaluation: Begin with Learning Assessment
This Journal of Extension article by Mary E. Arnold, Oregon State University, focuses on assessing the learning that takes place in an educational program. 2002.

Measuring Intentions and Assessing the Impact
This fact sheet by Nancy Ellen Kiernan, Penn State University, explains how to measure and report intent-to-change data from program participants. 2006.

Evaluating Behavior
This fact sheet by Nancy Ellen Kiernan, Penn State University, provides a step-by-step guide to evaluating behavior. 2001.

A Phone Interview: Steps to Increase Response From Your Target Audience
This fact sheet by Nancy Ellen Kiernan, Penn State University, focuses on increasing the success of phone interviews for data gathering, including program evaluation. 2002.

Measuring Factors that Influence Behavior and Assessing the Impact
This fact sheet by Nancy Ellen Kiernan, Penn State University, offers strategies for collecting and reporting behavior-change data. 2006.

How to Think About Evaluating a Webinar
This fact sheet by Nancy Ellen Kiernan, Penn State University, discusses how to measure a webinar's delivery method, audience participation, impact on knowledge, and participants' intentions. 2009.

Measuring Extension's Influence When Working With Multiple Agencies
This fact sheet by Nancy Ellen Kiernan, Penn State University, discusses how to isolate Extension's impact when working with other partners, with good advice on measuring the impact of multiple agencies' joint efforts. 2011.

Assessing Program Impact with the Critical Incident Technique
This Journal of Extension article by Barbara O'Neill, Rutgers University, discusses the critical incident technique (CIT), a qualitative research method in which subjects are encouraged to tell personal stories that provide descriptive data about their experiences on a particular topic. The article provides an overview of the technique and illustrates its use through an example. 2013.

Using Non-Participant Observers to Assess Program Impact (PDF)
This fact sheet by Roger A. Rennekamp, University of Kentucky, offers information on how to use non-participant observers to gauge a program's impact on its participants. A description of the technique, with useful examples for implementation, is included.

A Comparison of Web and Mail Survey Response Rates
This Public Opinion Quarterly journal article by Michael D. Kaplowitz, Timothy D. Hadlock and Ralph Levine, Michigan State University, examines the effect of surface mail contacts on Web survey response rates, and the relative merit of using a mail survey in a population that has ready access to the Web. The findings suggest that in a population in which each member has Web access, a Web survey application can achieve a comparable response rate to a questionnaire delivered by surface mail if the Web version is preceded by a surface mail notification. 2004.

Increasing Response Rates to Web-Based Surveys (PDF)
This Journal of Extension article by Martha C. Monroe and Damian C. Adams, University of Florida, shares several lessons learned and recommendations for increasing response rates to Web-based surveys, and draws attention to the importance of personalized and repeated contact for improving response rates. 2012.

Volunteer Interviewers in a Phone Interview: What to Consider
This fact sheet by Nancy Ellen Kiernan, Penn State University, explains how to select and train volunteers to conduct phone interviews with program participants. 2002.

Revisiting the Retrospective Pretest
This American Journal of Evaluation article by L. G. Hill and D. L. Betz, Washington State University, presents research comparing traditional and retrospective pretests. The authors recommend traditional pretests for examining program effects and retrospective pretests for examining subjective experiences of program-related change. 2005.

Reflective Appraisal of Programs (RAP): An Approach to Studying Clientele-Perceived Results of Cooperative Extension Programs RATIONALE (PDF)
This guide series by Claude F. Bennett, United States Department of Agriculture, focuses primarily on the strengths of one approach, Reflective Appraisal of Programs (RAP), for determining and appraising (evaluating) the results of extension programs in counties. One in a series of documents from the early 1980s that provides valuable and practical information on using this methodology. 1982.

Reflective Appraisal of Programs (RAP): An Approach to Studying Clientele-Perceived Results of Cooperative Extension Programs INTRODUCTION
This guide series by Claude F. Bennett, United States Department of Agriculture, provides an overview of the Reflective Appraisal of Programs (RAP) method for evaluating Extension programs. One in a series of documents from the early 1980s that provides valuable and practical information on using this methodology. 1982.

Reflective Appraisal of Programs (RAP): An Approach to Studying Clientele-Perceived Results of Cooperative Extension Programs GUIDE (PDF)
This guide series by Claude F. Bennett, United States Department of Agriculture, provides both the background concepts and step-by-step instructions extension agents need in order to determine program results. One in a series of documents from the early 1980s that provides valuable and practical information on using this methodology. 1982.

Reflective Appraisal of Programs (RAP): An Approach to Studying Clientele-Perceived Results of Cooperative Extension Programs WORKBOOK (PDF)
This guide series by Claude F. Bennett, United States Department of Agriculture, includes a blank workbook that allows step-by-step planning and implementation of a Reflective Appraisal of Programs. One in a series of documents from the early 1980s that provides valuable and practical information on using this methodology. 1982.

The 2010 User-Friendly Handbook for Project Evaluation (PDF)
This comprehensive how-to guide by Joy Frechtling, Melvin M. Mark, Debra J. Rog, Veronica Thomas, Henry Frierson, Stafford Hood, Gerunda Hughes and Elmima Johnson was developed for the National Science Foundation's Division of Research on Learning in Formal and Informal Settings. It is aimed at people who need to learn more about both the value of evaluation and how to design and carry out an evaluation, and it builds on firmly established principles, blending technical knowledge and common-sense approaches to quantitative and qualitative evaluation methods. 2012.

A Technique to Measure Opinions, Skills, Intentions, and Behaviors That's Different-Even Fun
This Journal of Extension article by Nancy Ellen Kiernan, Penn State University, and Gwen Brock describes how to construct the scale and analyze the resulting data, and provides examples from an actual program. 2000.

Smith Lever 3(d) Extension Evaluation and Outcome Reporting--A Scorecard to Assist Federal Program Leaders
This Journal of Extension article by Bill Hoffman, George Washington University, and Barbara Grabowski, Penn State University, discusses the Hoffman EEOR Scorecard, which was developed to help federal Smith Lever 3(d) program leaders evaluate and report program outcomes by blending the LOGIC Evaluation Model with Extension evaluation and outcome reporting (EEOR) ideal practices. 2004.

Collect Data
This on-line training guide by The Pell Institute for the Study of Opportunity in Higher Education, the Institute for Higher Education Policy, and Pathways to College Network focuses on what to consider when deciding how to collect data, several data collection methods and tools, and how best to select a sample and increase participation rates. Part of an on-line training series titled "Evaluation Toolkit"; see the Training Materials section for details.