SURV627: Experimental Design For Surveys

Data Analysis, Data Generating Process

Apply through UMD

Instructor: Ashley Amaya

A key tool of methodological research is the split-ballot experiment, in which randomly selected subgroups of a sample receive different questions, different response formats, or different modes of data collection. In theory, such experiments can combine the clarity of experimental designs with the inferential power of representative samples. All too often, though, such experiments use flawed designs that leave serious doubts about the meaning or generalizability of the findings. The purpose of this course is to consider the issues involved in the design and analysis of data from experiments embedded in surveys. It covers the purposes of experiments in surveys, examines several classic survey experiments in detail, and takes a close look at some of the pitfalls and issues in the design of such studies. These pitfalls include problems (such as the confounding of the experimental variables) that jeopardize the comparability of the experimental groups, problems (such as nonresponse) that cast doubts on the generality of the results, and problems in determining the reliability of the results. The course will also consider some of the design decisions that almost always arise in planning experiments — issues such as identifying the appropriate error term for significance tests and including necessary comparison groups.
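The logic of a split-ballot comparison can be sketched in a few lines of code. The example below is purely illustrative and uses Python rather than SAS or Stata: simulated respondents are randomly assigned to one of two question forms, and the "yes" rates are compared with a two-proportion z-test. The response probabilities are invented, and the simple-random-assignment variance used here would need adjustment under a complex sample design:

```python
import math
import random

random.seed(42)

# Hypothetical split-ballot experiment: each sampled respondent is randomly
# assigned to one of two question wordings (form A or form B).
n = 1000
forms = [random.choice(["A", "B"]) for _ in range(n)]

# Simulated responses: assume form B's wording raises the "yes" rate
# (invented probabilities, not from any real study).
def respond(form):
    p_yes = 0.40 if form == "A" else 0.48
    return random.random() < p_yes

yes = {"A": 0, "B": 0}
count = {"A": 0, "B": 0}
for f in forms:
    count[f] += 1
    yes[f] += respond(f)

p_a = yes["A"] / count["A"]
p_b = yes["B"] / count["B"]

# Two-proportion z-test assuming simple random assignment; with a complex
# sampling design, the variance (the "error term") would need adjustment.
p_pool = (yes["A"] + yes["B"]) / n
se = math.sqrt(p_pool * (1 - p_pool) * (1 / count["A"] + 1 / count["B"]))
z = (p_b - p_a) / se
print(f"p_A={p_a:.3f}, p_B={p_b:.3f}, z={z:.2f}")
```

Because assignment to forms is random, any systematic difference between the two "yes" rates can be attributed to the wording itself rather than to differences between the groups.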

Course objectives: 

By the end of the course, students will…

  • Learn about basic principles of experimental design
  • Recognize the main types of experimental designs
  • Improve the quality of designs used to carry out methodological research in or for surveys
  • Develop critical skills for spotting flaws in experimental and nonexperimental designs used to support causal inferences
  • Improve skills at analyzing results of survey experiments
  • Improve skills as both consumer and producer of experiments done to shed light on survey methodological issues

Grading will be based on:

Three online quizzes (45%)
Three exercises (45%)
Participation in online discussions (10%)

Assignment due dates are indicated in the syllabus. Extensions will be granted sparingly and are at the instructor's discretion.


Prerequisites:

At least one prior course in data analysis and the ability to use SAS or Stata.


Readings:

Dillman, D., Sinclair, M. D., & Clark, J. R. (1993). Effects of questionnaire length, respondent-friendly design, and a difficult question on response rates for occupant-addressed census mail surveys. Public Opinion Quarterly, 57, 289-304.

Fienberg, S. E., & Tanur, J. M. (1988). From the inside out and the outside in: Combining experimental and sampling structures. Canadian Journal of Statistics, 16, 135-151.

Groves, R. M., Fowler, F. J., Couper, M. P., Lepkowski, J. M., Singer, E., & Tourangeau, R. (2009). Survey methodology (2nd ed., pp. 259-287). Hoboken, NJ: John Wiley.

Heckman, J. J. (1979). Sample selection bias as a specification error. Econometrica, 47, 153-161.

Neter, J., & Waksberg, J. (1964). A study of response errors in expenditures data from household interviews. Journal of the American Statistical Association, 59, 17-55.

O'Reilly, J., Hubbard, M., Lessler, J., Biemer, P., & Turner, C. (1994). Audio and video computer assisted self-interviewing: Preliminary tests of new technology for data collection. Journal of Official Statistics, 10, 197-214.

Rubin, D. B. (1986). Statistics and causal inference: Comment: Which ifs have causal answers. Journal of the American Statistical Association, 81, 961-962.

Rubin, D. B. (1997). Estimating causal effects from large data sets using propensity scores. Annals of Internal Medicine, 127, 757-763.

Stuart, E. A., & Rubin, D. B. (2008). Best practices in quasi-experimental designs: Matching methods for causal inference. In J. Osborne (Ed.), Best practices in quantitative methods (pp. 155-176). Thousand Oaks, CA: Sage Publications.

Shadish, W. R. (2010). Campbell and Rubin: A primer and comparison of their approaches to causal inference in field settings. Psychological Methods, 15, 3-17.

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference (Chapters 1-3). Boston, MA: Houghton Mifflin.

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366.

Tourangeau, R. (2004). Design considerations for questionnaire development. In S. Presser, J. Rothgeb, M. Couper, J. Lessler, E. Martin, J. Martin, & E. Singer (Eds.), Methods for testing and evaluating survey questionnaires (pp. 209-224). New York: John Wiley & Sons.

Tourangeau, R., Kreuter, F., & Eckman, S. (2012). Motivated underreporting in screening interviews. Public Opinion Quarterly, 76, 453-469.

Tourangeau, R., Smith, T. W., & Rasinski, K. (1997). Motivation to report sensitive behaviors in surveys: Evidence from a bogus pipeline experiment. Journal of Applied Social Psychology, 27, 209-222.

Van den Brakel, J., & Renssen, R. H. (2005). Analysis of experiments embedded in complex sampling designs. Survey Methodology, 31, 23-40.

Van den Brakel, J. (2008). Design-based analysis of embedded experiments with applications in the Dutch Labour Force Survey. Journal of the Royal Statistical Society, Series A, 171, 581–613.

Weekly online meetings & assignments:

  • Week 1: Introduction 
  • Week 2: Examples of Experiments in Surveys (Quiz 1)
  • Week 3: Experimental Designs I (Quiz 2, Exercise 1)
  • Week 4: Experimental Designs II 
  • Week 5: Comparability and Generalizability (Exercise 2)
  • Week 6: Construct Validity I
  • Week 7: Construct Validity II; Statistical Validity 
  • Week 8: Wrap-Up (Quiz 3, Exercise 3)

Course Dates


Fall Semester (September – December)

