
Consumer Expenditure Surveys

The CE Midyear Data Quality Profile - 2022

Authors: Grayson Armstrong, Gray Jones, Tucker Miller, and Sharon Pham

This paper was published as part of the Consumer Expenditure Surveys Program Report Series.


Table of Contents

Overview
Highlights
1. Final disposition rates of eligible sample units (Diary and Interview Surveys)
2. Records Use (Interview Survey)
3. Information Booklet use (Diary and Interview Surveys)
4. Expenditure edit rates (Diary and Interview Surveys)
5. Income imputation rates (Diary and Interview Surveys)
6. Respondent burden (Interview Survey)
7. Survey mode (Diary and Interview Surveys)
8. Survey Response Time (Diary and Interview Surveys)
Summary
References



Overview

The Bureau of Labor Statistics (BLS) is committed to producing data that are of consistently high quality (i.e., accurate, objective, relevant, timely, and accessible) in accordance with Statistical Policy Directive No. 1.[1] This Directive, issued by the Office of Management and Budget, affirms the fundamental responsibilities of Federal statistical agencies and recognized statistical units in the design, collection, processing, editing, compilation, storage, analysis, release, and dissemination of statistical information. The BLS Consumer Expenditure Surveys (CE) program provides data users with a variety of resources to assist them in analyzing overall CE data quality. CE data users can evaluate quality on their own using resources such as the CE public-use microdata (PUMD).[2]

In addition, the Data Quality Profile (DQP) provides a comprehensive set of quality metrics that are timely, routinely updated, and accessible to users. For data users, DQP metrics are an indication of quality for both the Interview Survey and the Diary Survey. For internal stakeholders, these metrics signal areas for improvement in the surveys.

This DQP includes, for each metric, a brief description of each metric, along with the results, which are tabulated and graphed. The DQP Reference Guide (Armstrong, Jones, Miller & Pham 2023) gives detailed descriptions of the metrics, computations, and methodology.

Prior DQPs are available on the CE Library Page. BLS has published DQPs annually beginning with the 2017 data, and prototype DQPs are available for 2013 and 2015. Midyear DQPs began with the 2020 midyear data release.

The data quality metrics are reported in quarterly format, where the quarter is the three-month period in which the survey data were collected. Because Interview Survey respondents are asked to recall expenditures from the prior three months, the data collected in 2022q1 include expenditures made in 2021q4. For example, an interview conducted in February 2022 would include expenditures from November 2021, December 2021, and January 2022. In contrast, respondents to the Diary Survey report expenditures on the days they were transacted. This is why the Interview Survey metrics appear to be "ahead" of the Diary Survey by a quarter (e.g., 2022q3 for the Interview Survey and 2022q2 for the Diary Survey).
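
To make the recall timing concrete, the short sketch below maps an interview's collection month to the three reference months it covers. This is only an illustration of the convention described above, not CE production code.

```python
def reference_months(interview_year: int, interview_month: int) -> list[str]:
    """Return the three calendar months preceding the interview month,
    which an Interview Survey respondent is asked to recall."""
    months = []
    year, month = interview_year, interview_month
    for _ in range(3):
        month -= 1
        if month == 0:
            month, year = 12, year - 1
        months.append(f"{year}-{month:02d}")
    return list(reversed(months))

# An interview conducted in February 2022 covers November 2021 through January 2022.
print(reference_months(2022, 2))  # ['2021-11', '2021-12', '2022-01']
```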


Highlights

In this section, we highlight some of the metric trends of note from the past three years. This time frame covers the third quarter of 2019 to the second quarter of 2022 for the CE Diary survey, and the fourth quarter of 2019 to the third quarter of 2022 for the CE Interview survey. Subsequent sections describe the individual metrics with detailed data tables.

Recent Trends of Note

  • In the Interview Survey, final disposition rates fell a total of 10.8 percentage points over the three years covered in this report, from 51.6 percent in 2019q4 to 40.8 percent in 2022q3. This can be partially attributed to the 5.4 percentage point drop between 2022q2 and 2022q3, which coincided with the implementation of cost saving measures at Census. As a result of these measures, field representatives (FRs) did not revisit previous Type A (noninterview) cases, leading to more "other" and "noncontact" case outcomes.

  • Median total time for the Interview Survey increased across all waves between 2019q4 and 2022q3. This increase was likely due to the implementation of Computer Audio-Recorded Interviewing (CARI) in 2022q3. This outcome was expected following the CARI pretest, in which the CARI consent question was given to wave 4 participants in 2021q4 and metric data showed that wave 4 median time rose from 60 minutes in 2021q3 to 69.5 minutes in 2021q4.

  • Respondent burden in the Interview Survey has generally increased, as illustrated by the fall in the rate of respondents reporting no burden from 32.9 percent in 2019q4 to 27.1 percent in 2022q3. Despite the sharp rise in median total interview time in 2022q3, there was not a commensurate rise in respondents' perceived Interview Survey burden. The rate of respondents who reported no burden dropped from 28.4 percent in 2022q2 to 27.1 percent in 2022q3, but this level of variation is common from quarter to quarter.

  • Rates of Information Booklet use have continued to rise in both CE surveys since the initial COVID-19 pandemic-related drop to near-zero use.

  • In the CE Diary Survey, a majority of respondents in 2022q2 (69.6 percent) provided most of their household demographic information to the Field Representative during an in-person visit. This continued the upward trend from the series low point of 0.9 percent in 2020q2.


1. Final disposition rates of eligible sample units (Diary and Interview Surveys)

Final disposition rates of eligible sample units report the final participation outcomes of field staff's survey recruitment efforts. The BLS classifies the final outcome of eligible sample units into the following four main categories:

  1. Completed interview
  2. Nonresponse due to refusal
  3. Nonresponse due to noncontact
  4. Nonresponse due to other reasons

Completed interviews reclassified to a nonresponse by BLS staff are included within the other nonresponse category and are presented in the nonresponse reclassification tables (Tables 1.2 and 1.4). More information on the nonresponse reclassification edit, along with information on how BLS staff calculate response rates can be found in the DQP Reference Guide (Armstrong, Jones, Miller, and Pham, 2023).
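
As a rough illustration of how an unweighted disposition distribution like Table 1.1 below can be tabulated, the sketch that follows counts case-level outcomes and expresses each category as a share of eligible sample units. The outcome labels and case data are hypothetical; the actual outcome coding and response rate formulas are documented in the Reference Guide.

```python
from collections import Counter

# Hypothetical final outcome codes for eligible sample units.
outcomes = ["interview", "refusal", "interview", "noncontact",
            "other", "refusal", "interview", "other"]

counts = Counter(outcomes)
eligible = len(outcomes)

# Unweighted rates: each category's percentage of eligible sample units.
for category in ("interview", "refusal", "noncontact", "other"):
    rate = 100 * counts[category] / eligible
    print(f"{category:>10}: {rate:5.1f} percent")
```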

The key point of interest regarding response rates is that low response rates can indicate the potential for nonresponse bias in an expenditure estimate if the cause of nonresponse is correlated with that expenditure category. While recently published research on nonresponse bias has not shown statistically significant bias in the CE survey estimates during the COVID-19 pandemic (Ash, Nix, and Steinberg, 2022), BLS continues to monitor this risk.

In addition, higher response rates are preferred for more precise estimates. We present unweighted response rates in this report because unweighted rates measure the effectiveness of our data collection efforts. When we previously calculated weighted response rates, they showed no meaningful difference from the unweighted rates.

Diary Survey Summary

  • In March 2020, the Census Bureau temporarily suspended in-person diary placement interviews due to the COVID-19 pandemic. This caused response rates to drop to 26.1 percent in 2020q2 (Table 1.1).
  • Since 2020q2, response rates have partially recovered, rising to 42.9 percent in 2022q2. Overall, response rates declined 11.8 percentage points from 54.7 percent in 2019q3 to 42.9 percent in 2022q2 (Table 1.1).
  • Refusal rates contributed most to the decline in response rates, rising 11.1 percentage points from 25.8 percent to 36.9 percent (Table 1.1).
  • In 2021q4, the refusal rate exceeded the response rate for the first time in series history, by 1.7 percentage points.
  • Noncontact rates rose 3.7 percentage points from 6.1 percent in 2019q3 to 9.8 percent in 2022q2 (Table 1.1).
  • Other nonresponse rates declined by 3.0 percentage points overall from 2019q3 to 2022q2, but rose outside of the normal range to 26.3 percent in 2020q1 before jumping to a historical high of 59.1 percent in 2020q2 due to a temporary reclassification of interviews of unknown eligibility during the COVID-19 pandemic. This rate has fallen since 2020q3, reaching a record low of 10.3 percent in 2022q1 (Table 1.1).
Table 1.1 CE Diary Survey: distribution of final dispositions for eligible sample units (unweighted)
Quarter    Eligible sample units    Interview    Refusal    Noncontact    Other nonresponse
2019Q3     5,020     54.7    25.8     6.1    13.4
2019Q4     5,216     48.9    29.9     7.6    13.5
2020Q1     7,474     44.0    22.5     7.3    26.3
2020Q2     7,409     26.1    12.1     2.7    59.1
2020Q3     7,784     32.9    22.2     7.2    37.7
2020Q4     7,774     36.5    34.7    10.1    18.8
2021Q1     7,488     39.4    34.4     7.6    18.6
2021Q2     7,584     42.5    34.9     8.8    13.8
2021Q3     7,456     40.7    37.0    11.1    11.2
2021Q4     7,676     37.3    39.0    11.9    11.8
2022Q1     7,645     43.9    36.3     9.5    10.3
2022Q2     7,556     42.9    36.9     9.8    10.4
(Rates are percentages of eligible sample units.)

Table 1.2 Diary Survey: prevalence of nonresponse reclassifications
Quarter    Eligible sample units    Total reclassifications    COVID-19 reclassifications    Other reclassifications
2019Q3     5,020         229        0      229
2019Q4     5,216         188        0      188
2020Q1     7,474[3]      855      562      293
2020Q2     7,409       3,393    3,202      191
2020Q3     7,784         250       34      216
2020Q4     7,774         248       10      238
2021Q1     7,488         374        2      372
2021Q2     7,584         353        0      353
2021Q3     7,456         348        0      348
2021Q4     7,676         387        0      387
2022Q1     7,645         362        0      362
2022Q2     7,556         377        0      377

Interview Survey Summary

  • In March 2020, the Census Bureau temporarily suspended all in-person interviews due to the COVID-19 pandemic. Post-suspension response rates fell 6.3 percentage points from 2020q1 to 2020q2. Since then, response rates have continued to decline, reaching a series low of 40.8 percent in 2022q3 (Table 1.3).
  • All interview nonresponse classifications experienced an anomaly in 2022q3 due to the Census Bureau reducing contact attempts as a part of cost-cutting measures in place during August and September.
  • Refusal rates fell to 23.8 percent in 2022q3, a 19.7 point drop from 2022q2.
  • Other nonresponse rates rose to 19.0 percent in 2022q3, a rise of 17.0 percentage points from the previous quarter.
  • Noncontact rates rose by 8.1 percentage points from 2022q2 (8.3 percent) to 2022q3 (16.4 percent).
  • Outside of this anomaly in 2022q3, refusal rates generally rose over the past three years, increasing from 36.8 percent in 2019q4 to 43.5 percent in 2022q2.
  • In 2021q4, the refusal rate (44.3 percent) exceeded the response rate (43.5 percent) for the first time by 0.8 percentage points (Table 1.3).
  • Overall, other nonresponse rates declined from 5.5 percent in 2019q4 to 2.0 percent in 2022q2, but in that time there was noteworthy fluctuation, with the rate peaking at 37.9 percent in 2020q2. Similar to the Diary Survey, this variation was driven by high instances of COVID-19 reclassifications at the onset of the pandemic (Table 1.3). These COVID-19 reclassifications eventually fell to zero as BLS incorporated COVID-19 related nonresponses into the refusal category in 2021q2 and began treating them like other illness-related refusals.[4]
  • Prior to 2020q2, noncontact rates remained fairly steady but fell to near zero in 2020q2 (0.8 percent) due to a large increase in the number of nonresponse cases classified as other (Table 1.3). This is a consequence of BLS reclassification policy in response to the onset of the COVID-19 pandemic.
  • Noncontact rates rose back to 4.0 percent in 2020q3 and have continued to increase past the pre-pandemic norm since then, reaching 8.3 percent in 2022q2 (Table 1.3).
Table 1.3 Interview Survey: distribution of final dispositions for eligible sample units (unweighted)
Quarter    Eligible sample units    Interview    Refusal    Noncontact    Other nonresponse
2019Q4     10,170    51.6    36.8     6.1     5.5
2020Q1      9,956    52.2    33.8     4.7     9.3
2020Q2     10,581    45.9    15.4     0.8    37.9
2020Q3     11,190    44.5    24.2     4.0    27.4
2020Q4     11,185    46.5    36.8     6.3    10.4
2021Q1     11,125    46.0    38.9     6.8     8.3
2021Q2     11,120    46.7    41.1     9.5     2.7
2021Q3     11,117    46.1    43.0     8.4     2.5
2021Q4     11,275    43.5    44.3     9.9     2.3
2022Q1     11,320    45.8    42.8     9.3     2.1
2022Q2     11,202    46.2    43.5     8.3     2.0
2022Q3     11,235    40.8    23.8    16.4    19.0
(Rates are percentages of eligible sample units.)

Table 1.4 Interview Survey: prevalence of nonresponse reclassifications
Quarter    Eligible sample units    Total reclassifications    COVID-19 reclassifications    Other reclassifications
2019Q4     10,170        14        0       14
2020Q1      9,956       197      186       11
2020Q2     10,581     2,955    2,944       11
2020Q3     11,190        88       74       14
2020Q4     11,185        32       14       18
2021Q1     11,125        72        2       70
2021Q2     11,120       522        0      522
2021Q3     11,117       156        0      156
2021Q4     11,275        16        0       16
2022Q1     11,320        13        0       13
2022Q2     11,202        13        0       13
2022Q3     11,235         3        0        3

2. Records Use (Interview Survey)

The Records Use metric measures the proportion of respondents who refer to records while answering the Interview Survey questions, according to the interviewer. Examples of records include, but are not limited to: receipts, bills, checkbooks, and bank statements. Records use is retrospectively recorded by the interviewer at the end of the interview. Past research has shown that respondents who use expenditure records report more expenditures with lower rates of missing data (Abdirizak, Erhard, Lee, and McBride, 2017), so a higher prevalence of records use is desirable. Metrics in this section are presented by survey wave.[5]

Interview Survey Summary

  • Records usage temporarily rose in 2016 for Wave 1 respondents. This was likely the result of a field test conducted that year in which a subset of respondents received monetary incentives to use records (Elkin, McBride, and Steinberg, 2018).
  • Until 2021q3, records use had been largely stable across interview waves. Since then, there has been a noticeable upward trend in records use in each wave of the interview survey (Table 2.1).
Table 2.1 Interview Survey: prevalence of records use among respondents
Quarter    Wave           Respondents    Used    Did not use    Missing response
2019q4     Wave 1         1,318    53.0    46.2    0.8
2019q4     Waves 2 & 3    2,637    48.8    51.0    0.2
2019q4     Wave 4         1,293    53.1    46.3    0.5
2020q1     Wave 1         1,239    53.6    45.2    1.2
2020q1     Waves 2 & 3    2,601    50.7    48.9    0.4
2020q1     Wave 4         1,362    53.4    46.2    0.4
2020q2     Wave 1           965    51.9    47.3    0.8
2020q2     Waves 2 & 3    2,559    50.0    49.7    0.3
2020q2     Wave 4         1,334    52.4    47.1    0.5
2020q3     Wave 1         1,143    49.3    49.3    1.4
2020q3     Waves 2 & 3    2,444    49.4    50.3    0.3
2020q3     Wave 4         1,393    51.0    48.7    0.4
2020q4     Wave 1         1,230    50.1    49.6    0.3
2020q4     Waves 2 & 3    2,589    50.1    49.3    0.5
2020q4     Wave 4         1,386    51.9    47.8    0.2
2021q1     Wave 1         1,250    52.0    47.4    0.6
2021q1     Waves 2 & 3    2,515    50.3    49.4    0.4
2021q1     Wave 4         1,350    52.4    47.0    0.7
2021q2     Wave 1         1,325    49.8    49.6    0.6
2021q2     Waves 2 & 3    2,534    47.8    51.4    0.7
2021q2     Wave 4         1,337    50.5    48.9    0.6
2021q3     Wave 1         1,352    53.0    46.1    1.0
2021q3     Waves 2 & 3    2,488    48.6    50.6    0.8
2021q3     Wave 4         1,281    49.6    49.6    0.8
2021q4     Wave 1         1,229    54.8    44.4    0.8
2021q4     Waves 2 & 3    2,450    53.2    46.4    0.4
2021q4     Wave 4         1,223    54.0    45.3    0.7
2022q1     Wave 1         1,347    60.3    39.2    0.5
2022q1     Waves 2 & 3    2,551    53.9    45.7    0.4
2022q1     Wave 4         1,289    56.7    42.7    0.5
2022q2     Wave 1         1,325    55.4    43.5    1.1
2022q2     Waves 2 & 3    2,532    52.6    46.6    0.8
2022q2     Wave 4         1,320    54.4    45.1    0.5
2022q3     Wave 1         1,277    57.6    40.3    2.1
2022q3     Waves 2 & 3    2,153    55.7    43.1    1.1
2022q3     Wave 4         1,150    57.0    42.0    1.0
(Rates in percent.)

3. Information Booklet use (Diary and Interview Surveys)

The Information Booklet is a recall aid that the interviewer provides to respondents in both the Interview and Diary Surveys. Each booklet lists the response options for the demographic questions and the income bracket questions. In addition, the Interview Information Booklet provides clarifying examples of the kinds of expenditures each section/item code is intended to collect.

This metric measures the prevalence of Information Booklet use among respondents during their interviews, according to interviewers. For interviews conducted over the phone, the Information Booklet is typically not directly available to the respondent (although a PDF version is available on the BLS website), so this metric should be interpreted in conjunction with the rise in telephone interviews during the COVID-19 pandemic. Higher rates of Information Booklet usage are encouraged, as use can improve reporting quality by clarifying concepts and providing examples.

Diary Survey Summary

  • The prevalence of Information Booklet use among Diary Survey respondents decreased from 39.2 percent in 2019q3 to 26.8 percent in 2022q2 (Table 3.1).
  • The percentage of CUs that reported no Information Booklet usage increased from 58.1 percent in 2019q3 to 67.7 percent in 2022q2 (Table 3.1).
  • In mid-March 2020, CE suspended all in-person interviews, and Information Booklet use declined by 29.0 percentage points from 2020q1 to 2020q2, resulting in a series low of 4.1 percent. Rates of Information Booklet use have been recovering since but remain below pre-pandemic levels (Table 3.1).
Table 3.1 Diary Survey: prevalence of Information Booklet use among respondents
Quarter    Respondents    Used    Did not use    Missing response
2019q3     2,745    39.2    58.1    2.7
2019q4     2,553    37.1    59.6    3.3
2020q1     3,285    33.1    64.0    3.0
2020q2     1,936     4.1    94.0    1.9
2020q3     2,559     7.3    90.8    1.9
2020q4     2,835    10.5    86.4    3.1
2021q1     2,952    12.7    84.2    3.1
2021q2     3,224    16.7    79.6    3.7
2021q3     3,027    20.0    77.5    2.5
2021q4     2,864    22.2    71.3    6.4
2022q1     3,357    25.9    69.8    4.3
2022q2     3,239    26.8    67.7    5.5
(Rates in percent.)

Interview Survey Summary

  • In mid-March 2020, BLS temporarily discontinued the use of physical copies of the Information Booklet due to the COVID-19 pandemic and referred respondents to the online version. As a result, the Information Booklet use rate declined 44.1 percentage points for Wave 1 respondents from 2019q4 to 2020q2 (Table 3.2).
  • Declines in Information Booklet use were similar for subsequent waves, and about 95 percent of all respondents in 2020q2 did not have access to the Information Booklet (Table 3.2).
  • Beginning in July 2020, disposable copies of the Information Booklet were made available for FRs to provide to respondents, and Information Booklet use rose to an average of 5.3 percent across all waves in 2020q3 (Table 3.2).
  • Since then, Information Booklet use across all waves has continued to recover from the 2020q2 low (Table 3.2).
Table 3.2 Prevalence of Information Booklet use among Interview Survey respondents
Quarter    Wave           Respondents    Used    Did not use[6]    Missing response
2019q4     Wave 1         1,318    46.7    16.5    0.8
2019q4     Waves 2 & 3    2,637    33.7    14.9    0.2
2019q4     Wave 4         1,293    32.3    15.3    0.5
2020q1     Wave 1         1,239    37.8    15.7    1.2
2020q1     Waves 2 & 3    2,601    28.1    13.9    0.4
2020q1     Wave 4         1,362    28.8    13.7    0.4
2020q2     Wave 1           965     2.6     1.8    0.8
2020q2     Waves 2 & 3    2,559     2.9     1.8    0.3
2020q2     Wave 4         1,334     3.4     0.8    0.5
2020q3     Wave 1         1,143     6.7     2.4    1.4
2020q3     Waves 2 & 3    2,444     4.8     2.7    0.3
2020q3     Wave 4         1,393     5.2     2.1    0.4
2020q4     Wave 1         1,230    12.4     6.7    0.3
2020q4     Waves 2 & 3    2,589     9.4     3.6    0.5
2020q4     Wave 4         1,386     7.4     3.8    0.2
2021q1     Wave 1         1,250    13.3     6.2    0.6
2021q1     Waves 2 & 3    2,515     9.3     3.3    0.4
2021q1     Wave 4         1,350     8.5     4.2    0.7
2021q2     Wave 1         1,325    14.9     7.8    0.6
2021q2     Waves 2 & 3    2,534    11.1     7.0    0.7
2021q2     Wave 4         1,337     9.6     5.2    0.6
2021q3     Wave 1         1,352    19.3    11.7    1.0
2021q3     Waves 2 & 3    2,488    12.7     7.4    0.8
2021q3     Wave 4         1,281    10.8     7.2    0.8
2021q4     Wave 1         1,229    25.1     9.3    0.8
2021q4     Waves 2 & 3    2,450    17.3     7.6    0.4
2021q4     Wave 4         1,223    15.3     6.1    0.7
2022q1     Wave 1         1,347    26.9     9.8    0.5
2022q1     Waves 2 & 3    2,551    18.8     8.2    0.4
2022q1     Wave 4         1,289    19.1     7.1    0.5
2022q2     Wave 1         1,325    31.2    10.5    1.1
2022q2     Waves 2 & 3    2,532    22.0     8.7    0.8
2022q2     Wave 4         1,320    20.5     8.6    0.5
2022q3     Wave 1         1,277    34.3     7.0    2.1
2022q3     Waves 2 & 3    2,153    24.1     6.9    1.1
2022q3     Wave 4         1,150    22.8     6.3    1.0
(Rates in percent.)

4. Expenditure edit rates (Diary and Interview Surveys)

The Expenditure edit rates metric measures the proportion of reported expenditure data that are edited. These edits are changes made to the reported expenditure data during CE data processing, excluding changes due to time period conversion calculations and top-coding or suppression of reported values. Top-coding and suppression are done to protect respondent confidentiality in the public-use microdata. More information on these concepts is available on the CE Website.

The Interview Survey expenditure edit rates are broken down into three categories: Imputation, Allocation, and Manual Edits:

  • Imputation replaces missing or invalid responses with a valid value.

  • Allocation edits are applied when respondents provide insufficient detail to meet tabulation requirements. For example, if a respondent provides a non-itemized total expenditure report for the category of fuels and utilities, that total amount will be allocated to the target items mentioned by the respondent (such as natural gas and electricity); a simple illustration follows this list.

  • Manual edits occur whenever responses are directly edited by BLS economists based on their analysis and expert judgment.
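
To illustrate the allocation idea, the sketch below splits a non-itemized total across the items a respondent mentioned. The equal-share split is an assumption made purely for illustration; CE's production allocation factors are described in the DQP Reference Guide.

```python
def allocate_total(total: float, mentioned_items: list[str]) -> dict[str, float]:
    """Split a non-itemized expenditure total across the mentioned items.
    Equal shares are assumed here for illustration only; production
    allocation uses different factors."""
    share = round(total / len(mentioned_items), 2)
    allocated = {item: share for item in mentioned_items}
    # Assign any rounding remainder to the last item so amounts sum to the total.
    allocated[mentioned_items[-1]] += round(total - share * len(mentioned_items), 2)
    return allocated

# A $300 fuels-and-utilities total reported as covering natural gas and electricity.
print(allocate_total(300.00, ["natural gas", "electricity"]))
# {'natural gas': 150.0, 'electricity': 150.0}
```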

The Diary Survey expenditure edit rates are broken down into only two categories: Allocations and Other Edits. Most edits in the Diary Survey are allocations. Table 4.1 below shows the "other edits" category, which covers all other expenditure edits, including imputation and manual edits; the data show that these edits are relatively rare.

Beginning in 2022, the DQP team changed the way expenditure edit rates are measured in the Diary Survey data: changes to the alcohol cost flag are now considered an expenditure edit. This change was applied to the full metric series and has led to comparatively higher estimates for "Other Edits" and lower estimates for "Unedited" than in previous reports.

Imputation in CE data results from expenditure amount nonresponse. Allocation is a consequence of responses lacking the required detail for items asked by the survey. Lower edit rates are preferred, as they lower the risk of processing error. However, edits based on sound methodology can improve the completeness of the data, and thereby reduce the risk of measurement error and nonresponse bias in survey estimates. Additional information on expenditure edits is available in the DQP Reference Guide (Armstrong, Jones, Miller, and Pham, 2023).

Diary Survey Summary

  • Beginning in January 2020, an increase in CE's sample size resulted in the number of reported expenditures rising by over 22,000, but as response rates dropped in 2020q2, so did the number of expenditures.[7]
  • The total rate of unedited expenditure amounts fell 0.4 percentage point from 89.5 percent in 2019q3 to 89.1 percent in 2022q2.
  • Allocation rates fluctuated between 2019q3 and 2022q2, reaching a high of 11.6 percent in 2020q3, but stood at 10.5 percent in both 2019q3 and 2022q2.
  • Other Edits rose from 0.3 percent in 2021q3 to 0.8 percent in 2021q4 but have since fallen to 0.4 percent in 2022q2.
Table 4.1 Diary Survey: Reported Expenditure Records
Quarter    Expenditures    Allocated    Other Edit    Unedited
2019Q3      83,639    10.5    0      89.5
2019Q4      80,510     9.5    0      90.5
2020Q1     102,693     9.2    0      90.8
2020Q2      41,257    10.2    0.1    89.7
2020Q3      56,071    11.6    0      88.4
2020Q4      69,959    10.7    0      89.3
2021Q1      72,138    10.9    0.1    89.0
2021Q2      80,646    11.1    0.2    88.7
2021Q3      75,663    11.3    0.3    88.4
2021Q4      71,144    10.1    0.8    89.1
2022Q1      82,352    10.1    0.5    89.4
2022Q2      79,454    10.5    0.4    89.1
(Rates in percent.)

Interview Survey Summary

  • The total rate of unedited expenditure amounts increased 2.3 percentage points from 84.2 percent in 2019q4 to 86.5 percent in 2022q3.
  • This was primarily driven by allocation rates declining 2.4 percentage points from 11.6 percent in 2019q4 to 9.2 percent in 2022q3.
  • Declines in allocation rates were partially offset by increases in the manual edit rate from 0.2 percent in 2019q4 to 0.5 percent in 2022q3.
Table 4.2 Interview Survey: Reported Expenditure Records
Quarter    Expenditures    Allocated    Imputed    Imputed & Allocated    Manual Edit    Unedited
2019Q4     244,834    11.6    3.8    0.2    0.2    84.2
2020Q1     246,488    11.6    3.9    0.2    0.2    84.1
2020Q2     217,785    11.9    4.1    0.2    0.1    83.6
2020Q3     224,639    11.6    4.3    0.2    0.3    83.6
2020Q4     232,195    11.6    4.3    0.2    0.3    83.6
2021Q1     231,850    11.2    3.9    0.2    0.6    84.0
2021Q2     232,282    10.1    4.5    0.2    0.2    85.0
2021Q3     231,351    10.1    4.0    0.2    0.5    85.2
2021Q4     222,027     9.8    3.7    0.2    0.6    85.7
2022Q1     231,495     9.4    3.6    0.2    0.5    86.4
2022Q2     229,608     9.3    3.8    0.2    0.5    86.3
2022Q3     215,674     9.2    3.7    0.1    0.5    86.5
(Rates in percent.)

5. Income imputation rates (Diary and Interview Surveys)

The Income imputation rates metric describes edits performed when a consumer unit does not fully respond for at least one source of income. These edits are based on three imputation methods, applicable to both CE Surveys:

  1. Model-based imputation: when the respondent mentions receipt of an income source but fails to report the amount.
  2. Bracket response imputation: when the respondent mentions receipt of an income source, but only reports that income as falling within a specified range.
  3. All valid blank (AVB) conversion: when the respondent reports no receipt of income from any source, but the CE imputes receipt from at least one source.

After imputation, income from each component source is summed to compute total income before taxes. In the text that follows, income before taxes is defined as "unimputed" if no source of total income required imputation for one of the three reasons identified above. This applies to both the Diary and Interview Surveys.

Since the need for imputation reflects either item nonresponse or insufficient item detail (e.g., reporting income only as a range such as "between $40,000 and $50,000"), lower imputation rates are desirable for lowering measurement error. However, imputation based on sound methodology can improve the completeness of the data and reduce the risk of nonresponse bias that would arise from dropping incomplete cases from the dataset. Further details on the income imputation methodology can be found in the DQP Reference Guide (Armstrong, Jones, Miller, and Pham, 2023) and the User's Guide to Income Imputation in the CE (Paulin, Reyes-Morales, and Fisher, 2018).
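
The choice among the three imputation paths can be sketched as a simple classification of a consumer unit's responses for one income source. The rule below is a simplified illustration only; the production methodology, including the models behind model-based imputation, is described in Paulin, Reyes-Morales, and Fisher (2018).

```python
def imputation_method(reported_receipt: bool, amount, bracket) -> str:
    """Classify which imputation path applies to one income source.
    Simplified decision rule for illustration only."""
    if reported_receipt:
        if amount is not None:
            return "no imputation needed"
        if bracket is not None:
            return "bracket response imputation"
        return "model-based imputation"
    # If a CU reports no receipt from any source, CE may still impute
    # receipt from at least one source (the AVB conversion).
    return "all valid blank (AVB) conversion candidate"

print(imputation_method(True, None, (40_000, 50_000)))  # bracket response imputation
print(imputation_method(True, None, None))              # model-based imputation
```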

Diary Survey Summary

  • The rate of unimputed total income before taxes increased 3.4 percentage points from 52.4 percent in 2019q3 to 55.8 percent in 2022q2 (Table 5.1).
  • Model-based imputation rates rose from 18.5 percent in 2019q3 to 18.9 percent in 2022q2, a 0.4 percentage point increase (Table 5.1).
  • Increases in both model-based imputation and unimputed rates corresponded with the 3.4 percentage point drop in bracket response imputation rates from 22.1 percent in 2019q3 to 18.7 percent in 2022q2 (Table 5.1).
Table 5.1 Diary Survey: income imputation rates for total amount of family income before taxes
Quarter    Respondents    AVB    Bracket imputation    Model imputation    Model & bracket    Unedited
2019q3     2,745    2.1    22.1    18.5    4.9    52.4
2019q4     2,553    2.6    19.2    15.2    6.5    56.4
2020q1     3,285    1.9    20.0    17.5    5.1    55.5
2020q2     1,936    1.5    20.8    16.5    6.2    55.5
2020q3     2,559    2.6    18.1    19.5    6.7    53.1
2020q4     2,835    1.9    18.9    19.9    6.0    53.3
2021q1     2,952    2.0    18.7    18.4    5.6    55.2
2021q2     3,224    2.1    17.5    19.9    5.6    54.9
2021q3     3,027    2.5    19.3    18.4    5.3    54.5
2021q4     2,864    2.4    17.8    22.4    4.6    52.8
2022q1     3,357    2.3    19.0    19.5    4.5    54.7
2022q2     3,239    2.3    18.7    18.9    4.4    55.8
(AVB = valid blanks converted; rates in percent.)

Interview Survey Summary

  • The rate of unimputed total income before taxes dropped in the latter half of 2020 but returned to previous levels by 2021q4. Overall, this unedited rate increased 0.8 percentage points from 57.5 percent in 2019q4 to 58.3 percent in 2022q3 (Table 5.2).
  • Model-based imputation rates rose 0.2 percentage points from 17.2 percent in 2019q4 to 17.4 percent in 2022q3 (Table 5.2).
  • The increases in unimputed and model-based imputation rates corresponded with a decline in bracket imputation rates from 18.9 percent in 2019q4 to 17.9 percent in 2022q3 (Table 5.2).
Table 5.2 Interview Survey: income imputation rates for total amount of family income before taxes
Quarter    Respondents    AVB    Bracket imputation    Model imputation    Model & bracket    Unedited
2019q4     5,248    1.4    18.9    17.2    5.0    57.5
2020q1     5,202    1.3    18.6    17.6    4.5    58.1
2020q2     4,858    1.2    18.1    18.7    4.9    57.1
2020q3     4,980    1.1    18.2    19.0    5.1    56.6
2020q4     5,205    1.3    18.2    20.3    5.5    54.7
2021q1     5,115    1.4    17.8    19.9    5.5    55.5
2021q2     5,196    1.3    17.4    20.5    5.8    55.0
2021q3     5,121    1.2    18.1    19.7    5.4    55.5
2021q4     4,902    1.4    17.1    18.6    5.3    57.5
2022q1     5,187    1.3    17.8    17.9    5.2    57.8
2022q2     5,177    1.4    17.0    18.3    5.4    58.0
2022q3     4,580    1.1    17.9    17.4    5.3    58.3
(AVB = valid blanks converted; rates in percent.)

6. Respondent burden (Interview Survey)

Respondent burden in the Interview Survey relates to the perceived level of effort exerted by respondents in answering the survey questions. Survey designers are concerned about respondent burden because it has the potential to negatively affect response rates and overall response quality. Beginning in April 2017, the Interview Survey introduced, at the end of the Wave 4 interview, a respondent burden question with response options describing five different levels of burden. The respondent burden metric is derived from this question and maps the five burden categories to three metric values: not burdensome, some burden, and very burdensome. Please see the DQP Reference Guide (Armstrong, Jones, Miller, and Pham, 2023) for more details on the question wording and the burden categories.
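
The five-to-three mapping might look like the sketch below. The five response labels here are placeholders, since the actual question wording and category labels are documented in the Reference Guide.

```python
# Placeholder labels for the five burden response options; the actual
# wording is given in the DQP Reference Guide.
BURDEN_MAP = {
    "not at all burdensome": "not burdensome",
    "a little burdensome": "some burden",
    "somewhat burdensome": "some burden",
    "very burdensome": "very burdensome",
    "extremely burdensome": "very burdensome",
}

def metric_category(response: str) -> str:
    """Collapse a five-level burden response into the three metric values."""
    return BURDEN_MAP.get(response, "missing response")

print(metric_category("a little burdensome"))  # some burden
```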

A caveat to the interpretation of this metric is that, because the burden question is only asked at the end of Wave 4, the metric may underestimate survey burden due to self-selection bias. That is, respondents who agreed to participate in the final wave of the survey presumably find the survey less burdensome than sample units that dropped out before completing the final survey wave.

However, it is also possible that the respondent answering this question did not participate in prior interview waves. For example, the respondent who participated in the first three survey waves might move out of the sampled address prior to the final interview. This is not a common occurrence, but if someone else moves into the sampled address in time for the final wave, then they would be asked these questions.

Interview Survey Summary

  • Over the three years presented, the percentage of respondents who report perceiving no burden has generally decreased, falling from 32.9 percent in 2019q4 to 27.1 percent in 2022q3. This decline included a series low of 24.2 percent in 2021q4 (Table 6.1).
  • In contrast to the general decrease in the rate of respondents reporting no burden, the percentage of respondents who reported some burden increased from 53.8 percent in 2019q4 to 57.1 percent in 2022q3 (Table 6.1).
  • Rates of respondents who felt that the survey was very burdensome increased from 11.3 percent in 2019q4 to 13.4 percent in 2022q3, a total increase of 2.1 percentage points. There was meaningful variation across the period, including a peak of 16.3 percent in 2022q1 (Table 6.1).
Table 6.1 Interview Survey: respondents' perceived burden in the final survey wave
Quarter    Respondents    Not burdensome    Some burden    Very burdensome    Missing response
2019q4     1,293    32.9    53.8    11.3    2.0
2020q1     1,362    30.8    54.0    12.0    3.2
2020q2     1,334    30.7    54.3    12.5    2.5
2020q3     1,393    30.5    54.1    12.8    2.7
2020q4     1,386    29.7    53.5    14.9    1.9
2021q1     1,350    26.0    55.0    15.6    3.4
2021q2     1,337    29.0    55.8    12.3    2.9
2021q3     1,281    27.9    53.9    15.4    2.7
2021q4     1,223    24.2    57.9    15.3    2.6
2022q1     1,289    26.3    55.2    16.3    2.2
2022q2     1,320    28.4    54.7    14.6    2.3
2022q3     1,150    27.1    57.1    13.4    2.3
(Rates in percent.)

7. Survey mode (Diary and Interview Surveys)

These metrics measure the mode of data collection for the Diary and the Interview Surveys.

In the Diary Survey, there are two dimensions to the 'mode' of data collection. The first measures how data about the household (e.g., household size, demographic characteristics, income and assets) were collected by the Census Field Representative (mostly in person or mostly over the phone), and the second measures the diary form used by respondents when entering expenses during the diary-keeping period (online or paper). Until recently, the Diary Survey was administered strictly in paper form, but as part of the CE program's redesign effort, a new online diary mode was introduced.[8] This new mode prompted the inclusion of a quality metric that tracks the mode of diary chosen by the respondent at the time of placement. It should be noted that while the online diary became available in July 2020 as a supplemental data collection tool during the onset of the COVID-19 pandemic, it was not officially implemented into CE production until July 2022.

The Interview Survey was designed to be an in-person interview; however, the interviewer can also collect data over the phone, or by a combination of the two modes. Higher rates of in-person data collection are preferred since the interviewer can actively prompt the respondent, as well as encourage the use of recall aids, thereby reducing the risk of measurement error. Conducting first wave interviews in person is especially important because this is typically the respondent's first experience with the survey, and it affords the Census FR the opportunity to build rapport with the household. Additionally, BLS has agreements with the Census Bureau that, where in-person collection is possible for FRs, no more than 24 percent of first interviews and no more than 48 percent of subsequent interviews will be collected over the phone. More information on how we calculate the mode metrics can be found in the DQP Reference Guide (Armstrong, Jones, Miller, and Pham, 2023).
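
Those agreed-upon telephone ceilings can be expressed as a simple compliance check. The thresholds below come from the text above; the wave and count arguments are hypothetical inputs.

```python
def phone_share_ok(wave: int, n_phone: int, n_total: int) -> bool:
    """Check a wave's telephone share against the BLS-Census agreement:
    at most 24 percent of first interviews and at most 48 percent of
    subsequent interviews collected by phone."""
    limit = 0.24 if wave == 1 else 0.48
    return n_phone / n_total <= limit

print(phone_share_ok(1, 20, 100))  # True: 20 percent of wave 1 cases by phone
print(phone_share_ok(4, 60, 100))  # False: 60 percent exceeds the 48 percent ceiling
```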

Diary Survey Mode

  • The rate of in-person collection for diary household data has continued to rise after dropping to nearly zero in 2020q2 during the onset of the COVID-19 pandemic.
  • As of 2022q2 the rate of in-person collection for diary household data is 69.6 percent. Despite this continued rise, the proportion of diary cases where most household information is collected in-person has not yet recovered to pre-pandemic levels (Table 7.1).
  • In 2022q2, the proportion of online diary cases rose 0.9 percentage points from 27.8 percent to 28.7 percent, while the share of paper diary cases fell slightly from 69.1 percent to 68.4 percent (Table 7.2).
  • Generally, the proportion of online and paper expenditure diaries used by respondents has remained steady.
Table 7.1 Diary Survey: Interview Phase Mode
Quarter    Diary cases    In-person    Telephone    Missing
2019q3     2,745    92.3     5.1    2.6
2019q4     2,553    91.4     5.3    3.3
2020q1     3,285    76.3    20.8    2.9
2020q2     1,936     0.9    97.2    1.9
2020q3     2,559    24.5    73.6    1.9
2020q4     2,835    43.8    53.1    3.1
2021q1     2,952    46.5    50.3    3.2
2021q2     3,224    59.6    36.7    3.7
2021q3     3,027    64.6    32.9    2.5
2021q4     2,864    60.8    32.8    6.4
2022q1     3,357    63.1    32.7    4.2
2022q2     3,239    69.6    25.0    5.4
(Rates in percent.)

Table 7.2 Diary Survey: survey mode
Quarter    Diary cases    Paper    Online    Missing
2020q3     2,559    66.3    33.1    0.6
2020q4     2,835    71.3    26.8    1.9
2021q1     2,952    71.2    27.2    1.6
2021q2     3,224    70.8    27.1    2.1
2021q3     3,027    70.5    27.9    1.6
2021q4     2,864    69.6    26.1    4.3
2022q1     3,357    69.1    27.8    3.1
2022q2     3,239    68.4    28.7    2.9
(Rates in percent.)

Interview Survey Summary

  • Prior to the onset of the COVID-19 pandemic in early 2020, the proportion of in-person interviews across all waves remained steadily above 50 percent before beginning to drop in 2020q1 and hitting a low point in 2020q2 (Table 7.3).
  • In mid-March 2020, the Census Bureau suspended all in-person interviews, and by April, about 98 percent of all interviews were conducted over the phone regardless of wave (Table 7.3).
  • Beginning in July 2020, interviewers were allowed to resume in-person interviews, depending on local rules. From 2020q3 to 2022q3, the rate of in-person interviews increased across all waves from 9.3 percent to 35.9 percent, but this recovery toward pre-pandemic levels has not been smooth.
  • The stall in improvement occurred mostly between 2021q4 and 2022q1, when the proportion of in-person interviews did not rise above 31 percent.
  • Encouragingly, the rate of in-person interviews increased to 36.4 percent in 2022q2, but it fell slightly to 35.9 percent in 2022q3.
Table 7.3 Interview Survey: survey mode
Quarter    Respondents    In-person    Telephone    Missing
2019q4     5,248    61.9    37.8    0.3
2020q1     5,202    53.1    46.5    0.4
2020q2     4,858     1.7    98.0    0.3
2020q3     4,980     9.3    90.4    0.3
2020q4     5,205    19.5    80.3    0.2
2021q1     5,115    18.1    81.6    0.3
2021q2     5,196    26.3    73.4    0.3
2021q3     5,121    31.8    67.8    0.4
2021q4     4,902    30.7    69.0    0.3
2022q1     5,187    31.0    68.8    0.2
2022q2     5,177    36.4    63.1    0.5
2022q3     4,580    35.9    63.0    1.1
(Rates in percent.)

8. Survey Response Time (Diary and Interview Surveys)

In both the Interview and Diary Surveys, survey response time is defined as the number of minutes needed to complete an interview. For the Diary Survey, the survey response time metric is the median number of minutes to complete the personal interview component that collects household information on income and demographics. For the Interview Survey, the survey response time metric is the median number of minutes to complete the interview. In the Interview Survey, Wave 1 and Wave 4 interviews are typically longer because they collect additional information, such as household demographics or assets and liabilities. Survey response time is used in CE as an objective indicator of respondent burden: the longer the time needed to complete the survey, the more burdensome the survey. Fricker, Gonzalez, and Tan (2011) find that higher respondent burden negatively affects both response rates and data quality. However, survey response time could also reflect the respondent's degree of engagement. Engaged and conscientious respondents might take longer to complete the survey because they report more thoroughly or use records more extensively. Tracking the median survey response time can be useful for assessing the effect of changes in the survey design.
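
A minimal sketch of the median computation is shown below, using hypothetical interview durations grouped the way Tables 8.1 and 8.2 group them.

```python
from statistics import median

# Hypothetical interview durations in minutes, grouped by wave.
times_by_wave = {
    "Wave 1": [70, 82, 77, 91, 68],
    "Waves 2 & 3": [51, 56, 49, 60],
    "Wave 4": [58, 66, 61],
}

for wave, minutes in times_by_wave.items():
    print(f"{wave}: median {median(minutes):.1f} minutes")
```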

Diary Survey Summary

  • While the median Diary Survey response time only rose 0.8 minutes from 34.3 in 2019q3 to 35.1 in 2022q2, the metric did experience some variation throughout the period (Table 8.1).
  • After remaining at or above 33.3 minutes for the second half of 2019 and the first three quarters of 2020, the median diary survey time fell to 32.7 minutes in 2020q4 (Table 8.1).
  • Response time remained below 33.0 minutes from 2021q1 to 2021q3 before jumping back up to 34.9 minutes in 2021q4 (Table 8.1).
Table 8.1 Diary Survey: median length of time to complete the interview components (income and demographics)
Quarter    Diary cases    Minutes
2019q3     2,745    34.3
2019q4     2,553    34.4
2020q1     3,281    33.3
2020q2     1,936    34.9
2020q3     2,559    34.9
2020q4     2,835    32.7
2021q1     2,952    32.7
2021q2     3,224    32.9
2021q3     3,027    32.4
2021q4     2,864    34.9
2022q1     3,357    34.4
2022q2     3,239    35.1

Interview Survey Summary

  • Median total time for the Interview Survey increased across all waves in 2022Q3 following the implementation of CARI.
  • Median time for Wave 1 interviews fluctuated over the past three years between 74.4 and 88.5 minutes (Table 8.2).
  • In the last three years, median time to complete Waves 2 and 3 interviews ranged between 53.3 and 61.8 minutes (Table 8.2).
  • For Wave 4 interviews, median interview time ranged between 58.8 and 69.5 minutes (Table 8.2).
  • Median times for Waves 2 & 3 and Wave 4 remained steady between 2019q4 and 2021q3 but jumped above their previous ranges in 2021q4, to 57.8 and 69.5 minutes respectively. For Wave 4 interviews, the main source of this fluctuation was likely the test of Computer Audio-Recorded Interviewing (CARI) with fourth-wave participants in 2021q4 (Table 8.2).
Table 8.2 Interview Survey: median length of time to complete survey
Quarter    Respondents    Wave 1    Waves 2 & 3    Wave 4
2019q4     5,239    77.4    53.3    60.8
2020q1     5,199    78.8    56.0    59.9
2020q2     4,855    76.4    54.6    62.2
2020q3     4,980    76.8    56.7    62.2
2020q4     5,205    75.0    56.2    60.4
2021q1     5,115    74.4    54.6    61.7
2021q2     5,196    76.7    54.6    58.8
2021q3     5,121    78.0    54.6    60.0
2021q4     4,902    80.2    57.8    69.5
2022q1     5,187    79.6    57.7    62.8
2022q2     5,177    79.2    57.7    63.1
2022q3     4,580    88.5    61.8    69.2
(Median minutes.)

Summary

BLS is committed to producing data that are consistently of high statistical quality. As part of that commitment, BLS publishes the DQP and its accompanying Reference Guide (Armstrong, Jones, Miller, and Pham, 2023) to assist data users as they evaluate CE data quality metrics and judge whether CE data fit their needs. DQP metrics therefore cover both the Interview and Diary Surveys, multiple dimensions of data quality, and several stages of the survey lifecycle. Additionally, BLS uses these metrics internally to identify areas for potential survey improvement, evaluate the effects of survey changes, and to monitor the health of the surveys.

Response rates for the Diary Survey improved steadily following the precipitous decrease in early 2020 associated with the onset of the COVID-19 pandemic, though the interview rate has fluctuated over the past five quarters, resulting in little net improvement to response rates over that period. Response rates in the Interview Survey, on the other hand, largely stalled following the drop-off in early 2020 and have since continued to decline. However, the noticeable drop in response rates in the most recent quarter was attributable to cost saving measures employed by Census during that time period.

Perhaps the most noteworthy finding in the metric data was the sharp increase in median Interview Survey time across all waves, which coincided with the implementation of Computer Audio-Recorded Interviewing (CARI) in 2022q3. This finding was largely expected, as the CE's previous test of CARI on wave 4 interviews in 2021q4 resulted in an increased median interview time for wave 4 respondents. Internal CE research is being conducted on CARI that will further analyze this relationship with median survey time.

With respect to respondent burden in the Interview Survey, the rate of respondents who reported being "not burdened" by the Interview Survey has fallen since the beginning of 2020. Interestingly though, the recent jump in median Interview Survey time, an objective measure of burden, did not correspond to a commensurate increase in reported burden by respondents.

Record use in the Interview Survey fluctuated in the two most recent quarters but in general has been on an upward trend since 2021q3. This is a positive finding, as past CE research indicates that record use is a helpful tool for improving data quality (Wilson, 2017).

Interview Survey Mode and Information Booklet Use still appear to be on a path toward their pre-COVID figures, but the recovery is slow. In-person household data collection for the Diary Survey on the other hand has improved much more rapidly. Another positive of note is the slow decrease over the past two years in the percentage of allocations and imputations in the Interview Survey expenditure edits. Several metrics showed little change. Income imputation for the Diary Survey and the Interview Survey remained fairly stable over the time period covered, as did Expenditure edit rates and Median survey time in the Diary Survey.

BLS will continue to monitor these trends, and the next issue of the CE Data Quality Profile will be released in September 2023 with BLS's annual release of 2022 CE data. That report will feature CE Diary Survey data through 2022q4 and CE Interview Survey data through 2023q1.

References

Abdirizak, S., Erhard, L., Lee, Y., & McBride, B. (2017). Enhancing Data Quality Using Expenditure Records. Paper presented at the Annual Conference of the American Association for Public Opinion Research, New Orleans, LA.

Armstrong, G., Jones, G., Miller, T., & Pham, S. (2023). CE Data Quality Profile Reference Guide. Consumer Expenditure Surveys Program Report Series. U.S. Bureau of Labor Statistics.

Ash, S., Nix, B., & Steinberg, B. (2022). Report on Nonresponse Bias during the COVID-19 Period for the Consumer Expenditures Interview Survey. Consumer Expenditure Surveys Program Report Series. U.S. Bureau of Labor Statistics.

Elkin, I., McBride, B., & Steinberg, B. (2018). Results from the Incentives Field Test for the Consumer Expenditure Survey Interview Survey. Consumer Expenditure Surveys Program Report Series. U.S. Bureau of Labor Statistics.

Fricker, S., Gonzalez, J., & Tan, L. (2011). Are you burdened? Let's find out. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Phoenix, AZ.

Paulin, G., Reyes-Morales, S., & Fisher, J. (2018). User's Guide to Income Imputation in the CE. U.S. Bureau of Labor Statistics.

Wilson, T. J. (2017). The Impact of Record Use in the CE Interview Survey. CE Survey Methods Symposium. U.S. Bureau of Labor Statistics.

Footnotes


[1] The Office of Management and Budget has oversight over all Federal surveys and provides the rules under which they operate.  See the Federal Register notice for more details.

[2] Instructions on using the CE PUMD to create variables and flags for quality analysis can be found in the CE PUMD Getting Started Guide.
[3] The Diary Survey’s sample size increased in 2020q1 to support the Consumer Price Index’s Commodities and Services Survey sample frame.
[4] It should also be noted that in the nonresponse reclassification tables, the COVID-19 reclassifications dropped to zero for both the Diary Survey and the Interview Survey in 2021q2 due to the Census Bureau taking over the reclassification process. Now, BLS receives the data with the correct final outcomes, so there is no in-house reclassification process that would present itself in these tables.
[5] In the Interview Survey, each family in the sample is interviewed every 3 months over four calendar quarters. These interviews are commonly referred to as waves. For more information on survey administration, please see the CE Handbook of Methods.
[6] This “Did not use” category does not include records where there was no Information Booklet available.
[7] This increase in sample size was made possible by increased funding to accommodate collection of outlet information needed for calculating the Consumer Price Index.
[8] The Gemini Project was launched to research and develop a redesign of the Consumer Expenditure (CE) surveys, addressing issues of measurement error and respondent burden.
