
Cooperative Extension Fact Sheet FS995

A Step-By-Step Guide to Developing Effective Questionnaires and Survey Procedures for Program Evaluation & Research

  • Keith Diem, Program Leader in Educational Design

Surveys can be an effective means to collect data needed for research and evaluation. However, the method is often misused and abused. The challenge is to design a survey that accomplishes its purpose and avoids the following common errors:

  • Sampling Error (How representative is the group being surveyed?)
  • Frame Error (How accurate is the list from which respondents are drawn?)
  • Selection Error (Does everyone have an equal chance of being selected to respond?)
  • Measurement Error (Is the questionnaire valid and reliable?)
  • Non-response Error (Is the generalizability of findings jeopardized by subjects who did not reply?)

This fact sheet provides guidance for constructing questionnaires and developing procedures to administer them so they achieve valid and reliable results. This is not difficult if a logical process is followed.

1. Determine the purpose

Questionnaires are typically used for survey research, to determine the current status or "situation." They are also used to measure the difference in status "before" and "after" to determine changes that may be attributed to an educational program. Before creating a questionnaire, start by asking yourself a few important questions:

  • What do I need to know?
  • Why do I need to know it?
  • What will happen as a result of this questionnaire?
  • Can I get the information from existing sources instead of conducting a survey?

It's a good idea to start with research questions or objectives. Here are some examples:

Research Questions:

  • What is the household income of program participants in the money management course?
  • How many farmers use IPM in the county?
  • What public speaking skills does a typical 4-H member possess?

Research Objectives:

  • To determine the degree of recycling used in households of children enrolled in a summer youth environmental program.
  • To determine the average number of acres planted with no-till methods by dairy farmers attending the Extension course.
  • To determine the average cost per meal served in the household of EFNEP participants.

2. Decide what you are measuring

As with determining the purpose, this should be based on the objectives of your educational program and the evaluation of its outcomes and impact. Consider which of the following you are aiming to measure:

  • Attitude
  • Knowledge
  • Skills
  • Goals, intentions, aspirations
  • Behaviors and practices
  • Perceptions of knowledge, skills, or behavior

Of course, you might measure more than one of these, but the questions will differ clearly depending on the information you are trying to gather. Refer to the RCE fact sheet FS869, "Measuring Impact of Educational Programs," to learn more about the types of outcomes that can be measured.

3. Who should be asked?

  • What is the appropriate population (group of people/subjects) to be studied or questioned?
    • Should a census or sampling be used?
      • A population is the complete set of subjects that can be studied: people, objects, animals, plants, etc.
      • A sample is a subset of subjects that can be studied to make the evaluation/research project more manageable.
      • If a large enough random sample is taken, the results can be statistically similar to taking a census of an entire population, with reduced effort and cost.
    • Both the population sampled and the sampling method used affect to whom research findings can be generalized. In other words, for whom do the results apply? This is an indication of the external validity of a study.
    • There are a variety of ways samples can be taken (a brief code sketch follows this list):
      • Simple random (pull names from a hat)
      • Systematic random (e.g., every 5th name)
      • Stratified random (separate samples for each subgroup)
      • Cluster sampling (treating intact groups that cannot be broken up, such as classrooms, as subjects to be sampled)
  • Whatever population is studied or sampling method used, a high response rate is critical to ensure that respondents are truly representative of the population being studied. Nonresponse error affects the validity of the study, and a plan for dealing with it should be determined in advance. See the RCE fact sheet FS997, "Maximizing Response Rate and Controlling Nonresponse Error in Survey Research," for more information about this.
  • Other possible errors can be avoided with simple procedures: "sampling error" is reduced by using a large, random sample or conducting a census; "frame error" is minimized by making sure the list of potential subjects is current and accurate; and "selection error" is avoided by eliminating duplication from these lists.
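
For readers who will draw samples with software, here is a minimal Python sketch of the four sampling approaches above. It is an illustration only: the 100-name frame, the urban/rural strata, and the sample sizes are invented, and a real project would substitute its own list and design.

```python
import random

# Hypothetical sampling frame: a current, duplicate-free list of subjects.
# (An accurate list reduces frame error; removing duplicates reduces
# selection error.)
frame = [f"Subject {i}" for i in range(1, 101)]

random.seed(42)  # fixed seed so the illustration is repeatable

# Simple random sample: every subject has an equal chance (names from a hat).
simple = random.sample(frame, k=20)

# Systematic random sample: a random starting point, then every 5th name.
start = random.randrange(5)
systematic = frame[start::5]

# Stratified random sample: a separate random sample from each subgroup.
strata = {"urban": frame[:40], "rural": frame[40:]}
stratified = {group: random.sample(members, k=10)
              for group, members in strata.items()}

# Cluster sample: randomly select intact groups (e.g., classrooms) and
# survey everyone within the chosen clusters.
clusters = [frame[i:i + 10] for i in range(0, len(frame), 10)]
cluster_sample = [s for c in random.sample(clusters, k=2) for s in c]
```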

4. Consider the audience

  • Age
  • Education level
  • Familiarity with tests & questionnaires
  • Cultural bias/language barrier

To ensure that the survey instrument you develop is appropriate for your audience, "field test" your questionnaire with other people similar to your respondents before administering the final version. This will allow you to improve unclear questions or procedures and detect errors beforehand. Following recommendations in this guide pertaining to questionnaire design and wording of questions will reduce systematic "measurement" error, which will improve the internal validity of your study.

5. Choose an appropriate data collection method

  • Mailed
  • Telephone
  • Personal (face-to-face) interview
  • Web-based

See the RCE fact sheet FS996, "Choosing a Data Collection Method for Survey Research," for more information about the advantages and disadvantages of each method.

6. Choose a collection procedure: anonymous vs. confidential

Confidential

  • Name or other identifiers are used to follow up with nonrespondents or to match data from pretests/posttests.
  • Individual data are NOT shared with anyone! Information is not used for any other purpose.
  • Confidentiality must never be breached! This pledge is crucial in attaining honest, complete answers from respondents.
  • Identifying information is destroyed after the survey is complete.

Anonymous

  • Name is not asked of respondents.
  • Because no other identifying codes are used, the researcher is unable to follow up with nonrespondents or match data from pretests/posttests. This may not be a problem when doing random interviews (such as exit surveys).
  • Collecting basic descriptive information about respondents is still useful for comparing respondents with the population.
  • One possible way to maintain anonymity while still keeping track of nonrespondents is to send a separate postcard with the questionnaire. The respondent can return it separately, enabling him or her to declare that "John/Mary Doe has returned the questionnaire."

7. Choose measurement scale and scoring

Use scales that provide the information needed and are appropriate for respondents. Some choices are listed below, followed by a short sketch of why the choice matters for scoring:

  • Fixed-response:
    • Yes-No
    • True-False
    • Multiple Choice
    • Rating Scale/Continuum (such as a Likert-type scale)
    • Agree-Disagree
    • Rank ordering
  • Open-ended (narrative response)
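
To illustrate why this choice matters, here is a short Python sketch (with invented responses) showing that fixed-response answers tally directly into counts, while open-ended answers must first be read and coded by hand:

```python
from collections import Counter

# Hypothetical fixed-response answers: they tally directly.
fixed = ["Yes", "No", "Yes", "Yes", "No", "Yes"]
print(Counter(fixed))  # Counter({'Yes': 4, 'No': 2})

# Hypothetical open-ended answers: each must be read and categorized
# before any counting or statistics are possible.
open_ended = [
    "I started composting after the class.",
    "Not sure it changed anything for me.",
]
```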

8. Title the questionnaire

  • This will let the respondent know what it's about.
  • Include a brief purpose of the study (one sentence or phrase).
  • Consider including a simple graphic that depicts the purpose of the evaluation or program.

9. Start with non-threatening questions

  • This helps ensure the respondent is not intimidated.
  • Make the first questions relevant to the title/purpose and easy to answer.

10. Include simple instructions

  • How to complete each section
  • How to mark answers (pen/pencil, circle, check, etc.)

11. Use plain language

  • Be direct
  • Use simplest language necessary
  • Avoid jargon and acronyms
  • Include definitions if needed

12. Be brief

  • Keep the questionnaire as short as possible (without jeopardizing reliability)
  • Focus on "need to know" questions and minimize "nice to know" information

13. Put most important questions up front

  • Respondents may be fatigued or hurried by the time they reach later questions.
  • Ask demographic questions at the end so the questionnaire stays focused on the topic at hand.

14. Make sure questions match the measurement scale selected and that answer categories are precise

  • Make sure answer choices correspond to the questions, both in content and grammar.
  • Be consistent in arranging the answers. English reads from left to right and scales conventionally run from "low" to "high," but the most important rule is to explain the "rule" being used with clear instructions and to apply it consistently throughout the questionnaire.
  • Use exact numbers when possible (instead of "frequently" or "rarely").
  • Define time frames if necessary. Instead of "recently," ask "last month" or "during August 2001."
  • Make sure answer categories do not overlap (e.g., use age ranges 18–24 and 25–34, not 18–25 and 25–35).
  • If you are using a continuum scale with numbers to represent concepts, make sure to "anchor" at least the top and bottom of the scale with terms that describe meanings of the numbers. (For example, 1 = Low, 10 = High).
  • Balance the "negative" or "low" answer choices (both in number and degree) with "positive" or "high" choices on the scale. For example, don't give only positive answer choices or five degrees of "positive" (e.g., great, excellent, super, fantastic, awesome) and only one, extreme "negative" response choice (e.g., terrible).
  • An even number of answer choices doesn't give the respondent an easy, "middle" choice. If you want to offer a "neutral" or "no opinion" choice, then do it by design, not by accident.
  • Determine in advance how questions will be scored and what to do with missing data and incomplete or unclear responses (a brief scoring sketch follows this list).
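
The sketch below (Python, with invented wording) pulls together several of the rules above: a numeric continuum anchored at both ends, a balanced five-point scale with a deliberate middle choice, and a scoring rule, decided in advance, for blank or unrecognized answers.

```python
# Anchored 1-10 continuum: at least the endpoints carry labels.
CONTINUUM_ANCHORS = {1: "Low", 10: "High"}

# Balanced five-point scale: two negative choices, two positive choices,
# and a middle choice offered by design (an odd number of options).
LIKERT = {"Strongly disagree": 1, "Disagree": 2, "Neutral": 3,
          "Agree": 4, "Strongly agree": 5}

MISSING = None  # decided in advance: blank/unclear answers score as None

def score_likert(answer):
    """Return the numeric score, or MISSING for blank/unrecognized answers."""
    if not answer:
        return MISSING
    return LIKERT.get(answer.strip().capitalize(), MISSING)

print(score_likert("Agree"))  # 4
print(score_likert(""))       # None -> handled as missing data in analysis
```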

15. Ask only one question at a time

  • Avoid "double-barreled" questions that leave the respondent unsure how to answer.
    • Consider the confusion created by these examples:
      • Do you like cats and dogs?
      • Do you watch television or would you rather read?
      • Do you like tennis or do you like golfing?
      • Have you stopped beating your child?

16. Avoid "loaded" questions

  • Minimize "bias" in questions by using experts and field testing
  • Examples of "loaded" or biased questions:
    • Do you treat your children with kindness like a good parent should?
    • Are you as interested in sports as most other red-blooded American men?
    • Do you agree that our program deserves more funding than the greedy politicians currently provide?

17. Arrange in a logical order

  • Group similar questions together (such as by topic or scoring method)
  • Use an outline approach: number each question

18. Minimize open-ended questions

  • "Essay" questions make the survey look more like a test at school and give the impression that the form is a lot of work. They're also difficult to score and summarize.
  • Field testing a questionnaire with open-ended questions can help identify common answer categories that could be made into fixed-response (multiple choice) questions.

19. Provide space to tell more

  • Give the respondent room to comment about individual questions or the survey as a whole.
  • Ask for "Any additional comments or suggestions?"

20. Make sure it looks professional!

  • Pruuf reed!
  • Consider using a "booklet" format so it stands out from just "paper."
  • Use quality reproduction
  • Proofread!

21. Use a cover letter

  • Purpose of study and its usefulness
  • Sponsor of study
  • Why response is important
  • Promise of confidentiality and explanation of identification (code number on questionnaire)
  • Deadline for returning survey
  • Informed Consent. This lets respondents know that they are participating in a study, how the information will be used, how their information will be treated (confidentially), etc. "Passive" consent is assumed if the respondent completes and returns the questionnaire. "Active" consent requires a participant to give formal approval (e.g., by signing a form declaring that the respondent explicitly agrees to participate or to allow his or her minor child to take part). A good cover letter includes all of the information needed to gain "informed consent," which is required for most research involving human subjects.
  • What to do if questions arise
  • Thank you
  • Original signature of researcher or dignitary

22. Explain what to do with the completed questionnaire

  • Include a reasonable deadline
  • Indicate where to send the completed survey.
  • Include a stamped, pre-addressed return envelope.

23. Thank respondents!

  • On questionnaire
  • In cover letter
  • Possible follow-up, with "reward"

24. Check reliability

Reliability is a measure of how consistent the results of a measurement instrument (e.g., a test or questionnaire) will be. Reducing "random" error in questionnaires by removing "quirky" questions or changing their arrangement improves reliability. Various methods are available to measure the reliability of an instrument, depending on the type of instrument and its purpose, and major statistics software packages support reliability analysis. Reliability is often first determined using a "pilot test" of the proposed questionnaire and might also be repeated with the final version. When using an existing, commercially available instrument, reliability measures are generally reported for the audience for which it was intended. Examples of reliability measures are:

  • Test-Retest (determines repeatability)
  • Internal Consistency Measures (such as Cronbach's Alpha reliability coefficient or Split-half analysis) determine how well items contained in the questionnaire measure the "same thing" (see the sketch below).
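
As one concrete illustration, both measures can be computed from pilot-test scores in a few lines of Python. The data below are invented, and a real reliability analysis would more likely use a statistics package as noted above; this is only a sketch of the underlying arithmetic.

```python
import statistics

# Hypothetical pilot-test scores: each row is one respondent's answers to
# four Likert-type items intended to measure the same construct.
rows = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
]

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])  # number of items
    item_vars = sum(statistics.variance(col) for col in zip(*rows))
    total_var = statistics.variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_vars / total_var)

print(f"alpha = {cronbach_alpha(rows):.2f}")  # 0.92 for this invented data

# Test-retest repeatability: correlate total scores from two administrations
# of the same instrument to the same (hypothetical) respondents.
first = [17, 13, 19, 10, 17]   # totals from the first administration
second = [16, 14, 18, 11, 18]  # totals from the retest
print(f"r = {statistics.correlation(first, second):.2f}")  # 0.96 (Python 3.10+)
```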

25. Determine the need to obtain Institutional Review Board (IRB) approval before administering a questionnaire to human subjects

  • See the RCE fact sheet FS998, "Compliance with Regulations for Protection of Human Research Subjects in Program Evaluation and Research: A Guide for Rutgers Cooperative Extension Faculty and Staff," for details. Here is a summary:
    • You do not need to apply to the Rutgers Institutional Review Board (IRB) if the primary goal of the activity is program evaluation and/or improvement and the findings are intended for internal use or for sharing in popular media for public relations purposes. Systematic, scientific research methods can be used to perform program evaluation, even if children are subjects.
    • You must apply to the IRB if the primary intent of the evaluation is to publish in scholarly journals, obtain scientific grants, or give scholarly presentations. If determining program impact, value, and success is primary and scholarly publishing later becomes a secondary outcome, IRB approval can then be sought to use "existing data" from the program evaluation for publishing.
    • Research with human subjects that does not pose more than minimal risk to participants must be approved in advance by the IRB, even if the study would be considered "exempt" from IRB "full" or "expedited" review. A research project that poses more than minimal risk to human subjects will likely require IRB "full" or "expedited" review before receiving approval.

26. Conduct evaluation as planned

  • Administer questionnaire
  • Analyze data
  • Determine findings
  • Report results

February 2002