Research highlights of the Consumer Expenditure Survey redesign
Although the CE survey has not been fully redesigned since the late 1970s, the CE program has a long history of conducting research that addresses challenges faced by the survey. From experimenting with recall periods in the early 1970s6 to large-scale field testing in the 1990s to developing a user-friendly diary,7 the CE survey has sought to base key design decisions on research. A key component of the Gemini Project’s redesign objectives has been the evaluation of the features of various designs. This section of the article identifies some of the design decisions that have been researched and discusses the findings and implications of that work, such as its impact on respondent burden or its potential for cost savings. The appendix summarizes the design options and associated findings described below.
Reduce the number of interviews. One option investigated for its potential to reduce measurement error and respondent burden in the CEQ is cutting the number of interviews from the five currently conducted. One concern with the present design is that it may produce panel conditioning, whereby respondents’ participation in multiple interviews changes either their actual behaviors or the behaviors they report. Research into the presence of panel conditioning in the CEQ suggests that although conditioning may be present in some categories, overall there is limited evidence that it is a source of measurement error in later rounds of interviewing. Jennifer Shields and Nhien To examined expenditures in the trips-and-vacations section of the CEQ for respondents participating in all five interviews between April 2001 and March 2002.8 The researchers found evidence of curtailed reporting across interviews, suggesting the presence of panel conditioning within this expenditure category. Ting Yan and Kennon Copeland studied changes in mean expenditure amounts and in the number of reported expenditures across all interview sections for interviews conducted during the April–June 2008 period.9 Comparing the expenditure reports of respondents completing later interviews with reports from earlier interviews, they found no statistically significant decrease in either the amount or the number of expenditures. An examination of reported expenditures by expenditure size and by respondent subgroup likewise revealed no decline in reports for respondents who completed later interviews. Therefore, although respondents may be burdened by participating in five separate interviews, conducting fewer than five interviews with each household is not seen as a redesign option that would reduce measurement error.
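The kind of wave-to-wave comparison described above can be illustrated with a small sketch. The expenditure figures below are synthetic, and the Welch two-sample t-test is used here only as a stand-in for the survey program's actual estimation and testing procedures, which are not specified in this article:

```python
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t-statistic for two independent samples with (possibly) unequal variances."""
    va, vb = variance(a), variance(b)  # sample variances
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

# Hypothetical quarterly expenditure reports (in dollars) from two groups of respondents:
wave1 = [520, 480, 610, 455, 530]   # respondents in their first interview
wave5 = [500, 470, 605, 450, 525]   # respondents in their fifth interview

t = welch_t(wave1, wave5)
print(round(t, 3))  # a t-statistic near zero indicates no detectable decline across waves
```

A t-statistic close to zero, as in this toy comparison, is consistent with the finding that later-wave respondents do not report significantly lower expenditures than earlier-wave respondents.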
Although there is only limited evidence that reducing the number of survey waves would improve data quality, conducting fewer interviews would be expected to cut costs. The CEQ currently uses the first interview wave to collect roster information and an inventory of housing characteristics and goods, as well as to serve as a bounding interview that prevents telescoping (for example, respondents reporting in the second interview purchases that occurred before that interview’s reference period). Studies comparing reporting levels from households that received a bounding interview with levels from households that did not found no statistically significant differences between the two groups,10 limiting the utility of the first interview as a bounding device. Ian Elkin examined CEQ timing and data collection costs to estimate the effect of moving the roster and inventory questions from wave one to wave two. He found that while shortening the wave-one bounding interview yielded only marginal cost savings, much larger savings were associated with eliminating the bounding interview entirely. He also reported that the roster and inventory questions could be added to wave two with only a manageable, incremental increase in interview length.11 These findings, combined with evidence that the National Crime Victimization Survey experienced no adverse effects on data quality (after statistical adjustment) when it eliminated its bounding interview, have led to the recommendation to eliminate the wave-one CEQ interview.
6 Seymour Sudman and Robert Ferber, “Some experimentation with recall procedures and diaries for consumer expenditures,” paper presented at the annual meeting of the American Statistical Association, August 23–26, 1971, http://www.bls.gov/cex/cesrvmeth_ferber.pdf.
7 Eric Figueroa, Jeanette Davis, Sally Reyes-Morales, Nhien To, and Lucilla Tan, “Is a user-friendly diary more effective? Findings from a field test,” BLS Statistical and Survey Methods Research Papers (U.S. Bureau of Labor Statistics, 2003), http://www.bls.gov/osmr/pdf/st030050.pdf.
8 Jennifer Shields and Nhien To, “Learning to say no: conditioned underreporting in an expenditure survey,” paper presented at the American Association for Public Opinion Research annual conference, May 12–15, 2005, http://www.amstat.org/sections/srms/proceedings/y2005/Files/JSM2005-000432.pdf.
9 Ting Yan and Kennon Copeland, “Panel conditioning in the Consumer Expenditure Quarterly Interview Survey,” paper presented at the annual meeting of the American Statistical Association, July 31–August 5, 2010, http://www.amstat.org/sections/srms/proceedings/y2010/Files/307812_59394.pdf.
10 Catherine Hackett, “Consumer Expenditure Quarterly Interview Survey: the effectiveness of the bounding interview,” Bounding interview project unpublished paper (U.S. Bureau of Labor Statistics, 2011).
11 Ian Elkin, “Recommendation regarding the use of a CE bounding interview,” Bounding interview project unpublished paper (U.S. Bureau of Labor Statistics, 2012).