
Consumer Expenditure Surveys

The CE Annual Data Quality Profile - 2022

Authors: Grayson Armstrong, Gray Jones, Tucker Miller, and Sharon Pham

This paper was published as part of the Consumer Expenditure Surveys Program Report Series.

*This report was updated on December 13, 2023 after an error in the data was identified and corrected. More information about the specifics of the update can be found in the official errata notice.

Table of Contents

Overview
Highlights
1. Final disposition rates of eligible sample units (Diary and Interview Surveys)
2. Records Use (Interview Survey)
3. Information Booklet use (Diary and Interview Surveys)
4. Expenditure edit rates (Diary and Interview Surveys)
5. Income imputation rates (Diary and Interview Surveys)
6. Respondent burden (Interview Survey)
7. Survey mode (Diary and Interview Surveys)
8. Survey Response Time (Diary and Interview Surveys)
Summary
References



Overview

The Bureau of Labor Statistics (BLS) is committed to producing data that are of consistently high quality (i.e., accurate, objective, relevant, timely, and accessible) in accordance with Statistical Policy Directive No. 1.[1] This Directive, issued by the Office of Management and Budget, affirms the fundamental responsibilities of Federal statistical agencies and recognized statistical units in the design, collection, processing, editing, compilation, storage, analysis, release, and dissemination of statistical information. The BLS Consumer Expenditure Surveys (CE) program provides data users with a variety of resources to assist them in analyzing overall CE data quality, and CE data users can evaluate quality on their own by utilizing these resources.

In addition, the Data Quality Profile (DQP) provides a comprehensive set of quality metrics that are timely, routinely updated, and accessible to users. For data users, DQP metrics are an indication of quality for both the Interview Survey and the Diary Survey.[4] For internal stakeholders, these metrics signal areas for improvements to the surveys.

This DQP includes, for each metric, a brief description of the metric, along with accompanying results, which are tabulated and graphed. The DQP Reference Guide gives detailed descriptions of the metrics, computations, and methodology (Armstrong, Jones, Miller, and Pham, 2023). The intention of the DQP report series is to highlight recent trends that may impact CE data quality, and for this purpose, the DQP reports cover the three most recent years of available data.

Prior DQPs are available on the CE Library Page. BLS began publishing annual DQPs beginning with 2017 data, though prototype DQPs are available for 2013 and 2015. Midyear DQPs started with the 2020 midyear data release, which covered the period of July 2019 through June 2020.

The data quality metrics are reported in quarterly format, where the quarter is the three-month period in which the survey data were collected. Because Interview Survey respondents are asked to recall expenditures from the prior three months, the data collected in 2023q1 include expenditures made in 2022q4. For example, an interview conducted in February 2023 would include expenditures from November 2022, December 2022, and January 2023. In contrast, respondents to the Diary Survey report expenditures on the days they were incurred in the two-week diary keeping period. This is why this report’s Interview Survey metrics appear to be “ahead” of the Diary Survey by a quarter (e.g., 2023q1 for the Interview Survey and 2022q4 for the Diary Survey).
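The quarter offset described above can be sketched in a few lines. This is an illustrative helper of our own (`recall_months` is not part of any CE tooling), showing why data collected in one quarter reach back into the prior one:

```python
from datetime import date

def recall_months(interview_month: date, months_back: int = 3) -> list:
    """Months an Interview Survey respondent is asked to recall:
    the `months_back` calendar months preceding the interview month."""
    months, y, m = [], interview_month.year, interview_month.month
    for _ in range(months_back):
        m -= 1
        if m == 0:
            y, m = y - 1, 12
        months.append(date(y, m, 1))
    return sorted(months)

# An interview conducted in February 2023 covers November 2022 through
# January 2023, so 2023q1 collection includes 2022q4 expenditures.
print(recall_months(date(2023, 2, 1)))
```

Diary respondents, by contrast, record expenses on the day they occur, so no such offset applies.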


Highlights

In this section, we highlight noteworthy metric trends from the past three years. This time frame covers the first quarter of 2020 to the fourth quarter of 2022 for the CE Diary Survey, and the second quarter of 2020 to the first quarter of 2023 for the CE Interview Survey. Subsequent sections describe the individual metrics with detailed data tables.

Recent Trends of Note

  • Diary Survey refusal rates have trended upward over the past two years of available data, rising from 34.4 percent in 2021q1 to 39.8 percent in 2022q4. This rise in refusals partially explains the lack of growth in response rates over the same period (Table 1.1).

  • Interview Survey nonresponse classifications in 2022q3 differ from other quarters under study, due to the Census Bureau reducing contact attempts as a part of cost-cutting measures in place during August and September. As a result, Interview Survey response rates dropped by 5.4 percentage points in one quarter, from 46.2 percent in 2022q2 to 40.8 percent in 2022q3 (Table 1.3).

  • Since 2021, records use in the Interview Survey has been on an upward trend across all waves (Chart 2.1).

  • Information booklet use has continued to rise in both CE Surveys over the past two years of available data (Graph 3.1 & Graph 3.2).

  • Diary Survey expenditure edits experienced an uptick in the proportion of "Other Edits" in the two most recent quarters of data. This was likely due to a processing error that led to an inflated number of flagged "invalid blank" values for the alcohol cost edit (Chart 4.1).

  • In the Interview Survey, the rate of Wave 1 interviews conducted in-person increased to 50.3 percent in 2022q2, following a low of 1.5 percent in 2020q2, and has since hovered between 46.8 and 49.3 percent (Table 7.4).
  • Wave 1 and Wave 4 median interview times have decreased in recent quarters, after an increase that followed the implementation of Computer Assisted Recorded Interviewing (CARI). Median interview times for Waves 2 and 3, on the other hand, have continued to rise (Table 8.2).
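Several of the highlights above report movements in percentage points rather than percent: the difference between two rates that are themselves expressed in percent. A minimal illustration (the function name is ours, purely for exposition):

```python
def pp_change(start: float, end: float) -> float:
    """Change in percentage points between two rates already expressed
    in percent. A refusal rate moving from 34.4 to 39.8 is a rise of
    5.4 percentage points (not 5.4 percent, which would be far smaller)."""
    return round(end - start, 1)

print(pp_change(34.4, 39.8))  # Diary refusal rate, 2021q1 to 2022q4
```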

1. Final disposition rates of eligible sample units (Diary and Interview Surveys)

Final disposition rates of eligible sample units represent the final participation outcomes of field staff's survey recruitment efforts. The BLS classifies the final outcome of eligible sample units into the following four main categories:

  1. Completed interview
  2. Nonresponse due to refusal
  3. Nonresponse due to noncontact
  4. Nonresponse due to other reasons

Completed interviews reclassified to a nonresponse by BLS staff are included in the other nonresponse category and are presented in the nonresponse reclassification tables (Tables 1.2 and 1.4). More information on the nonresponse reclassification edit and other nonresponse categories, along with information on how BLS staff calculate response rates can be found in the DQP Reference Guide (Armstrong, Jones, Miller, and Pham, 2023).

The key point of interest regarding response rates is that low response rates can indicate the potential for nonresponse bias of an expenditure estimate if the cause of nonresponse is correlated with that expenditure category. While recent research on nonresponse bias has not shown statistically significant bias in the CE survey estimates during the COVID-19 pandemic (Ash, Nix, and Steinberg, 2022), BLS continues to monitor this risk.[5]

Response rates in this report are presented as unweighted, because unweighted rates measure the effectiveness of our data collection efforts. When response rates were previously calculated using weights, they showed no meaningful difference from the unweighted rates.          
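As a rough sketch of how the unweighted rates relate to final disposition counts, the snippet below divides each category's count by the total number of eligible sample units. The unit counts are hypothetical, chosen only so the output mirrors the 2022q4 Diary row in Table 1.1; the official response-rate formulas are given in the DQP Reference Guide.

```python
def disposition_rates(counts: dict) -> dict:
    """Unweighted final disposition rates: each category's share of
    eligible sample units, in percent. A simplified sketch, not the
    production CE computation."""
    eligible = sum(counts.values())
    return {k: round(100 * v / eligible, 1) for k, v in counts.items()}

# Hypothetical unit counts (7,749 eligible units in total).
rates = disposition_rates(
    {"interview": 2867, "refusal": 3084, "noncontact": 938, "other": 860}
)
print(rates)
```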

Diary Survey Summary

  • In March 2020, the Census Bureau temporarily suspended in-person diary placement interviews due to the COVID-19 pandemic. This resulted in declines in response, refusal, and noncontact rates and a massive increase in other nonresponse rates (Table 1.1). Since the start of the three-year period coincides with the onset of the pandemic, when final disposition rates were anomalous, final disposition rates will be compared to 2021q1.

  • Response rates varied between 37.0 percent and 43.9 percent from 2021q1 to 2022q4 (Table 1.1).

  • Refusal rates increased by 5.4 percentage points since 2021q1 (Table 1.1). The refusal rate exceeded the response rate in 2021q4 and 2022q4 by 1.7 and 2.8 percentage points, respectively (Table 1.1).

  • Noncontact rates rose 4.5 percentage points, from 7.6 percent in 2021q1 to 12.1 percent in 2022q4 (Table 1.1).

  • Overall, other nonresponse rates declined from 18.6 percent in 2021q1 to 11.1 percent in 2022q4 (Table 1.1).

Table 1.1 CE Diary Survey: distribution of final dispositions for eligible sample units (unweighted)
Quarter    Eligible sample units    Interview    Refusal    Noncontact    Other nonresponse
2020q1      7,474    44.0    22.5     7.3    26.3
2020q2      7,409    26.1    12.1     2.7    59.1
2020q3      7,784    32.9    22.2     7.2    37.7
2020q4      7,774    36.5    34.7    10.1    18.8
2021q1      7,488    39.4    34.4     7.6    18.6
2021q2      7,584    42.5    34.9     8.8    13.8
2021q3      7,456    40.7    37.0    11.1    11.2
2021q4      7,676    37.3    39.0    11.9    11.8
2022q1      7,645    43.9    36.3     9.5    10.3
2022q2      7,556    42.9    36.9     9.8    10.4
2022q3      7,594    42.8    36.9     9.7    10.7
2022q4      7,749    37.0    39.8    12.1    11.1

Table 1.2 Diary Survey: prevalence of nonresponse reclassifications
Quarter    Eligible sample units    Total reclassifications    COVID-19 reclassifications    Other reclassifications
2020q1      7,474[6]      855    562    293
2020q2      7,409       3,393  3,202    191
2020q3      7,784         250     34    216
2020q4      7,774         248     10    238
2021q1      7,488         374      2    372
2021q2      7,584         353      0    353
2021q3      7,456         348      0    348
2021q4      7,676         387      0    387
2022q1      7,645         362      0    362
2022q2      7,556         377      0    377
2022q3      7,594         348      0    348
2022q4      7,749         345      0    345

Interview Survey Summary

  • Since the start of the three-year period coincides with the onset of the pandemic, final disposition rates will be compared to 2021q1.

  • Response rates have declined from 46.0 percent in 2021q1 to 42.5 percent in 2023q1 (Table 1.3).

  • All interview nonresponse classifications experienced an anomaly in 2022q3 due to the Census Bureau reducing contact attempts as a part of cost-cutting measures in place during August and September.

  • Outside of this anomaly in 2022q3, refusal rates have ranged between 33.9 percent and 44.3 percent since 2021q1 (Table 1.3).

    • In 2021q4, the refusal rate (44.3 percent) exceeded the response rate (43.5 percent) for the first time by 0.8 percentage points (Table 1.3).

  • Noncontact rates increased substantially since 2021q1 (6.8 percent), reaching 20.1 percent in 2023q1 (Table 1.3).
    • Noncontact rates reached a series high of 23.0 percent in 2022q4 (Table 1.3).

  • Excluding the cost cutting period, other nonresponse rates generally declined from 8.3 percent in 2021q1 to 0.9 percent in 2023q1 (Table 1.3).

Table 1.3 Interview Survey: distribution of final dispositions for eligible sample units (unweighted)
Quarter    Eligible sample units    Interview    Refusal    Noncontact    Other nonresponse
2020q2     10,581    45.9    15.4     0.8    37.9
2020q3     11,190    44.5    24.2     4.0    27.4
2020q4     11,185    46.5    36.8     6.3    10.4
2021q1     11,125    46.0    38.9     6.8     8.3
2021q2     11,120    46.7    41.1     9.5     2.7
2021q3     11,117    46.1    43.0     8.4     2.5
2021q4     11,275    43.5    44.3     9.9     2.3
2022q1     11,320    45.8    42.8     9.3     2.1
2022q2     11,202    46.2    43.5     8.3     2.0
2022q3     11,235    40.8    23.8    16.4    19.0
2022q4     11,248    41.0    33.9    23.0     2.0
2023q1     11,299    42.5    36.5    20.1     0.9

Table 1.4 Interview Survey: prevalence of nonresponse reclassifications
Quarter    Eligible sample units    Total reclassifications    COVID-19 reclassifications    Other reclassifications
2020q2     10,581    2,955    2,944     11
2020q3     11,190       88       74     14
2020q4     11,185       32       14     18
2021q1     11,125       72        2     70
2021q2     11,120      522        0    522
2021q3     11,117      156        0    156
2021q4     11,275       16        0     16
2022q1     11,320       13        0     13
2022q2     11,202       13        0     13
2022q3     11,235        3        0      3
2022q4     11,248       10        0     10
2023q1     11,299       11        0     11

2. Records Use (Interview Survey)

The Records Use metric measures the proportion of respondents who refer to records while answering the Interview Survey questions, according to the Census Field Representative. Examples of records include, but are not limited to: receipts, bills, checkbooks, and bank statements. Records use is retrospectively recorded by the interviewer at the end of the interview. Past research has shown that respondents who use expenditure records report more expenditures with lower rates of missing data (Abdirizak, Erhard, Lee, and McBride, 2017), so a higher prevalence of records use is desirable. Metrics in this section are presented by survey wave.[7]

Interview Survey Summary

  • Records usage temporarily rose in 2016 for Wave 1 respondents. This is likely a result of a field test conducted in that year that gave a subset of respondents monetary incentives (Elkin, McBride, and Steinberg, 2018) to use records (Table 2.1).

  • Since 2021q3, records use across all waves has been on an upward trend (Graph 2.1).

    • Records use in Wave 1 reached a three-year high of 60.8 percent in 2023q1 (Table 2.1).

    • Records use in Waves 2 – 3 recorded a series high of 59.8 percent in 2023q1 (Table 2.1).

    • Records use in Wave 4 recorded a series high of 61.7 percent in 2023q1 (Table 2.1).

Table 2.1 Interview Survey: prevalence of records use among respondents
Quarter    Wave           Respondents    Used    Did not use    Missing response
2020q2     Wave 1               965    51.9    47.3    0.8
2020q2     Waves 2 & 3        2,559    50.0    49.7    0.3
2020q2     Wave 4             1,334    52.4    47.1    0.5
2020q3     Wave 1             1,143    49.3    49.3    1.4
2020q3     Waves 2 & 3        2,444    49.4    50.3    0.3
2020q3     Wave 4             1,393    51.0    48.7    0.4
2020q4     Wave 1             1,230    50.1    49.6    0.3
2020q4     Waves 2 & 3        2,589    50.1    49.3    0.5
2020q4     Wave 4             1,386    51.9    47.8    0.2
2021q1     Wave 1             1,250    52.0    47.4    0.6
2021q1     Waves 2 & 3        2,515    50.3    49.4    0.4
2021q1     Wave 4             1,350    52.4    47.0    0.7
2021q2     Wave 1             1,325    49.8    49.6    0.6
2021q2     Waves 2 & 3        2,534    47.8    51.4    0.7
2021q2     Wave 4             1,337    50.5    48.9    0.6
2021q3     Wave 1             1,352    53.0    46.1    1.0
2021q3     Waves 2 & 3        2,488    48.6    50.6    0.8
2021q3     Wave 4             1,281    49.6    49.6    0.8
2021q4     Wave 1             1,229    54.8    44.4    0.8
2021q4     Waves 2 & 3        2,450    53.2    46.4    0.4
2021q4     Wave 4             1,223    54.0    45.3    0.7
2022q1     Wave 1             1,347    60.3    39.2    0.5
2022q1     Waves 2 & 3        2,551    53.9    45.7    0.4
2022q1     Wave 4             1,289    56.7    42.7    0.5
2022q2     Wave 1             1,325    55.4    43.5    1.1
2022q2     Waves 2 & 3        2,532    52.6    46.6    0.8
2022q2     Wave 4             1,320    54.4    45.1    0.5
2022q3     Wave 1             1,277    57.6    40.3    2.1
2022q3     Waves 2 & 3        2,153    55.7    43.1    1.1
2022q3     Wave 4             1,150    57.0    42.0    1.0
2022q4     Wave 1             1,234    57.1    40.5    2.4
2022q4     Waves 2 & 3        2,258    55.4    42.9    1.7
2022q4     Wave 4             1,125    59.0    40.0    1.0
2023q1     Wave 1             1,288    60.8    37.6    1.6
2023q1     Waves 2 & 3        2,400    59.8    39.4    0.8
2023q1     Wave 4             1,119    61.7    37.6    0.7

3. Information Booklet use (Diary and Interview Surveys)

The Information Booklet is a recall aid the Census Field Representative provides to respondents in both the Interview and Diary Surveys, and each booklet provides the response options for the demographic and income bracket questions. In addition, the Interview Information Booklet provides clarifying examples of the kinds of expenditures that each section/item code is intended to collect.

This metric measures the prevalence of Information Booklet use among respondents during their interviews, according to the Census Field Representative. For interviews conducted over the phone, the Information Booklet is typically not readily available to the respondent (although a PDF version is available on the BLS website), so this metric should be interpreted in conjunction with the rise in telephone interviews during the COVID-19 pandemic. Higher rates of Information Booklet usage are encouraged, as use can improve reporting quality by clarifying concepts and providing examples.

Diary Survey Summary

  • In mid-March 2020, CE suspended all in-person interviews, and Information Booklet use declined by 29 percentage points from 2020q1 to 2020q2 (Table 3.1).

  • In 2020q2, Information Booklet usage fell to a record low of 4.1 percent. Since then, the percentage of CUs that used the Information Booklet has been steadily increasing, but it has yet to recover to its pre-pandemic level from 2020q1 (Table 3.1).

  • The prevalence of Information Booklet use among Diary Survey respondents decreased from 33.1 percent in 2020q1 to 27.8 percent in 2022q4 (Table 3.1).

Table 3.1 Diary Survey: prevalence of Information Booklet use among respondents
Quarter    Respondents    Used    Did not use    Missing response
2020q1     3,285    33.1    64.0    3.0
2020q2     1,936     4.1    94.0    1.9
2020q3     2,559     7.3    90.8    1.9
2020q4     2,835    10.5    86.4    3.1
2021q1     2,952    12.7    84.2    3.1
2021q2     3,224    16.7    79.6    3.7
2021q3     3,027    20.0    77.5    2.5
2021q4     2,864    22.2    71.3    6.4
2022q1     3,357    25.9    69.8    4.3
2022q2     3,239    26.8    67.7    5.5
2022q3     3,248    27.9    68.1    4.0
2022q4     2,865    27.8    68.1    4.1

Interview Survey Summary

  • Due to the COVID-19 pandemic, BLS temporarily discontinued the use of physical Information Booklets; the rate of Information Booklet usage was below 4 percent for all waves in 2020q2 (Table 3.2).

  • Since the physical Information Booklets became available again and in-person interviews resumed, Information Booklet use has continued to rise across all interview waves (Table 3.2).

  • Information Booklet use among Wave 1 respondents recovered the most from the series low point in 2020q2, increasing 33.3 percentage points to 35.9 percent in 2023q1 (Table 3.2).

Table 3.2 Interview Survey: prevalence of Information Booklet use among respondents
Quarter    Wave           Respondents    Used    Did not use[8]    Missing response
2020q2     Wave 1               965     2.6     1.8    0.8
2020q2     Waves 2 & 3        2,559     2.9     1.8    0.3
2020q2     Wave 4             1,334     3.4     0.8    0.5
2020q3     Wave 1             1,143     6.7     2.4    1.4
2020q3     Waves 2 & 3        2,444     4.8     2.7    0.3
2020q3     Wave 4             1,393     5.2     2.1    0.4
2020q4     Wave 1             1,230    12.4     6.7    0.3
2020q4     Waves 2 & 3        2,589     9.4     3.6    0.5
2020q4     Wave 4             1,386     7.4     3.8    0.2
2021q1     Wave 1             1,250    13.3     6.2    0.6
2021q1     Waves 2 & 3        2,515     9.3     3.3    0.4
2021q1     Wave 4             1,350     8.5     4.2    0.7
2021q2     Wave 1             1,325    14.9     7.8    0.6
2021q2     Waves 2 & 3        2,534    11.1     7.0    0.7
2021q2     Wave 4             1,337     9.6     5.2    0.6
2021q3     Wave 1             1,352    19.3    11.7    1.0
2021q3     Waves 2 & 3        2,488    12.7     7.4    0.8
2021q3     Wave 4             1,281    10.8     7.2    0.8
2021q4     Wave 1             1,229    25.1     9.3    0.8
2021q4     Waves 2 & 3        2,450    17.3     7.6    0.4
2021q4     Wave 4             1,223    15.3     6.1    0.7
2022q1     Wave 1             1,347    26.9     9.8    0.5
2022q1     Waves 2 & 3        2,551    18.8     8.2    0.4
2022q1     Wave 4             1,289    19.1     7.1    0.5
2022q2     Wave 1             1,325    31.2    10.5    1.1
2022q2     Waves 2 & 3        2,532    22.0     8.7    0.8
2022q2     Wave 4             1,320    20.5     8.6    0.5
2022q3     Wave 1             1,277    34.3     7.0    2.1
2022q3     Waves 2 & 3        2,153    24.1     6.9    1.1
2022q3     Wave 4             1,150    22.8     6.3    1.0
2022q4     Wave 1             1,234    32.3     8.5    2.4
2022q4     Waves 2 & 3        2,258    25.4     8.3    1.7
2022q4     Wave 4             1,125    23.7     6.7    1.0
2023q1     Wave 1             1,288    35.9     8.9    1.6
2023q1     Waves 2 & 3        2,400    28.5     7.7    0.8
2023q1     Wave 4             1,119    26.5     8.2    0.7

4. Expenditure edit rates (Diary and Interview Surveys)

The Expenditure edit rates metric measures the proportion of reported expenditure data that are edited. These edits are changes made to the reported expenditure data during CE data processing, excluding changes due to time period conversion calculations and top-coding or suppression of reported values. Top-coding and suppression are done to protect respondent confidentiality in the public-use microdata. More information on these concepts is available on the CE Website.

The Interview Survey expenditure edit rates are broken down into three categories: Imputation, Allocation, and Manual Edits:

  • Imputation replaces missing or invalid responses with a valid value.

  • Allocation edits are applied when respondents provide insufficient detail to meet tabulation requirements. For example, if a respondent provides a non-itemized total expenditure report for the category of fuels and utilities, that total amount will be allocated to the target items mentioned by the respondent (such as natural gas and electricity).

  • Manual edits occur whenever responses are directly edited by BLS economists based on their analysis and expert judgment.

The Diary survey expenditure edit rates are only broken down into two categories: Allocations and Other Edits. Most edits in the Diary survey are allocations. Table 4.1 below shows the "other edits" category, which covers all other expenditure edits including imputation and manual edits. We can see from the data that these edits are relatively rare.

Beginning in 2022 the DQP team made a change to the way expenditure edit rates are measured in the Diary survey data, as changes to the alcohol cost flag are now considered an expenditure edit. This change was applied to the full metric series and has led to comparatively higher estimates for "Other Edits" and lower estimates for "Unedited" compared to previous reports.

CE data imputation results from expenditure item nonresponse. Allocation is a consequence of responses lacking the required details for items asked by the survey. Lower edit rates are preferred, as they lower the risk of processing error. However, edits based on sound methodology can improve the completeness of the data, and thereby reduce the risk of measurement error and nonresponse bias in survey estimates. Additional information on expenditure edits is available in the DQP Reference Guide (Armstrong, Jones, Miller, and Pham, 2023).
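Conceptually, each edit-rate column in the tables below is the share of expenditure records carrying a given edit flag. A simplified sketch, using illustrative flag labels rather than CE's actual processing codes:

```python
from collections import Counter

def edit_rate_distribution(edit_flags: list) -> dict:
    """Share of expenditure records in each edit category, in percent.
    `edit_flags` holds one label per record, e.g. 'allocated',
    'imputed', 'manual', or 'unedited' (labels are illustrative)."""
    n = len(edit_flags)
    counts = Counter(edit_flags)
    return {cat: round(100 * c / n, 1) for cat, c in counts.items()}

# A toy batch of 100 records: 87 untouched, 10 allocated, 3 imputed.
flags = ["unedited"] * 87 + ["allocated"] * 10 + ["imputed"] * 3
dist = edit_rate_distribution(flags)
print(dist)
```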

Diary Survey Summary

  • An increase in CE's sample size beginning in January 2020 resulted in the number of reported expenditures rising by over 22,000; but as response rates dropped in 2020q2, so did the number of expenditures (Table 4.1).[9]

  • The total rate of unedited expenditure amounts fell 3.0 percentage points from 90.5 percent in 2020q1 to 87.5 percent in 2022q4 (Table 4.1).

  • This decrease in unedited expenditures can be explained by a 2.0 percentage point increase in Other Edits, from 0.2 percent in 2020q1 to 2.2 percent in 2022q4 (Table 4.1).

  • Allocation rates have fluctuated during the period between 2020q1 and 2022q4 but have had a net increase of 1.1 percentage points from 9.2 percent in 2020q1 to 10.3 percent in 2022q4 (Table 4.1).

Table 4.1 Diary Survey: Reported Expenditure Records
Quarter    Expenditures    Allocated    Other edit    Unedited
2020q1     102,693     9.2    0.2    90.5
2020q2      41,257    10.2    0.2    89.5
2020q3      56,071    11.6    0.1    88.3
2020q4      69,959    10.7    0.2    89.2
2021q1      72,138    10.9    0.2    89.0
2021q2      80,646    11.1    0.3    88.5
2021q3      75,663    11.3    0.5    88.2
2021q4      71,144    10.1    1.0    88.9
2022q1      82,352    10.1    0.6    89.4
2022q2      79,454    10.5    0.4    89.1
2022q3      83,957    10.9    1.2    87.9
2022q4      74,215    10.3    2.2    87.5

Interview Survey Summary

  • The total rate of unedited expenditure amounts increased 3.7 percentage points from 83.6 percent in 2020q2 to 87.3 percent in 2023q1 (Table 4.2).

  • This was primarily driven by allocation rates declining 3.3 percentage points from 11.9 percent in 2020q2 to 8.6 percent in 2023q1 (Table 4.2).

  • Declines in allocation rates were partially offset by increases in the manual edit rate from 0.1 percent in 2020q2 to 0.4 percent in 2023q1 (Table 4.2).

Table 4.2 Interview Survey: Reported Expenditure Records
Quarter    Expenditures    Allocated    Imputed    Imputed & allocated    Manual edit    Unedited
2020q2     217,785    11.9    4.1    0.2    0.1    83.6
2020q3     224,639    11.6    4.3    0.2    0.3    83.6
2020q4     232,195    11.6    4.3    0.2    0.3    83.6
2021q1     231,850    11.2    3.9    0.2    0.6    84.0
2021q2     232,282    10.1    4.5    0.2    0.2    85.0
2021q3     231,351    10.1    4.0    0.2    0.5    85.2
2021q4     222,027     9.8    3.7    0.2    0.6    85.7
2022q1     231,495     9.4    3.6    0.2    0.5    86.4
2022q2     229,608     9.3    3.8    0.2    0.5    86.3
2022q3     215,674     9.2    3.7    0.1    0.5    86.5
2022q4     213,369     9.1    3.7    0.2    0.4    86.6
2023q1     226,199     8.6    3.5    0.1    0.4    87.3

5. Income imputation rates (Diary and Interview Surveys)

The Income imputation rates metric describes edits performed on a consumer unit's (CU's) nonresponse to at least one source of income. This edit is based on three imputation methods, applicable to both CE Surveys:

  1. Model-based imputation: when the respondent mentions receipt of an income source but fails to report the amount.
  2. Bracket response imputation: when the respondent mentions receipt of an income source, but only reports that income as falling within a specified range.
  3. All valid blank (AVB) conversion: when the respondent reports no receipt of income from any source, but the CE imputes receipt from at least one source.

After imputation, income from each component source is summed to compute total income before taxes.  In the following text, income before taxes is defined as “unimputed income” if no source of total income required imputation for one of the three reasons identified above. As stated, this applies to both the Diary and Interview Surveys.

Since the need for imputation reflects either item nonresponse or insufficient item detail (e.g., reporting income only as a range such as "between $40,000 and $50,000"), lower imputation rates are desirable for lowering measurement error. However, imputation based on sound methodology can improve the completeness of the data and reduce the risk of nonresponse bias due to dropping incomplete cases from the dataset. Further details on the income imputation methodology can be found in the DQP Reference Guide (Armstrong, Jones, Miller, and Pham, 2023) and the User's Guide to Income Imputation in the CE (Paulin, Reyes-Morales, and Fisher, 2018).
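The three methods can be illustrated with a toy classifier. The record layout (the 'received', 'amount', and 'bracket' fields) and the decision order are our simplifying assumptions for exposition, not the production CE imputation logic described in the User's Guide:

```python
def classify_income_record(sources: list) -> str:
    """Classify a consumer unit's total-income record by the imputation
    its income sources would require. Each source is a dict with
    illustrative keys: 'received' (bool), 'amount' (float or None),
    'bracket' (str or None). A simplified sketch only."""
    # AVB conversion: no receipt reported from any source, so CE may
    # impute receipt from at least one source.
    if all(not s["received"] for s in sources):
        return "AVB conversion"
    # Model-based: receipt reported, but neither an amount nor a range.
    needs_model = any(s["received"] and s["amount"] is None and s["bracket"] is None
                      for s in sources)
    # Bracket: receipt reported only as falling within a range.
    needs_bracket = any(s["received"] and s["amount"] is None and s["bracket"] is not None
                        for s in sources)
    if needs_model and needs_bracket:
        return "model & bracket"
    if needs_model:
        return "model imputation"
    if needs_bracket:
        return "bracket imputation"
    return "unimputed"
```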

Diary Survey Summary

  • The rate of unimputed total income before taxes increased 1.1 percentage points from 55.5 percent in 2020q1 to 56.6 percent in 2022q4 (Table 5.1).

  • Model-based imputation rates increased 0.6 percentage points, from 17.5 percent to 18.1 percent, between 2020q1 and 2022q4 (Table 5.1).

  • Increases in both unedited and model-based imputation rates coincided with the decreases in bracket imputation from 20.0 percent to 17.6 percent, and both model & bracket imputation rates from 5.1 percent to 3.7 percent (Table 5.1).

Table 5.1 Diary Survey: income imputation rates for total amount of family income before taxes
Quarter    Respondents    Valid blanks converted (AVB)    Bracket imputation    Model imputation    Model & bracket    Unedited
2020q1     3,285    1.9    20.0    17.5    5.1    55.5
2020q2     1,936    1.5    20.8    16.5    6.2    55.0
2020q3     2,559    2.6    18.1    19.5    6.7    53.1
2020q4     2,835    1.9    18.9    19.9    6.0    53.3
2021q1     2,952    2.0    18.7    18.4    5.6    55.2
2021q2     3,224    2.1    17.5    19.9    5.6    54.9
2021q3     3,027    2.5    19.3    18.4    5.3    54.5
2021q4     2,864    2.4    17.8    22.4    4.6    52.8
2022q1     3,357    2.3    19.0    19.5    4.5    54.7
2022q2     3,239    2.3    18.7    18.9    4.4    55.8
2022q3     3,248    1.8    17.6    19.4    6.1    55.1
2022q4     2,865    3.9    17.6    18.1    3.7    56.6

Interview Survey Summary

  • The rate of unimputed total income before taxes increased from 57.1 percent to 59.5 percent from 2020q2 to 2023q1 (Table 5.2).

  • Model-based imputation rates reached a series low of 16.4 percent in 2023q1, down from 18.7 percent in 2020q2 (Table 5.2).

  • The rate of both model-based & bracket imputations also reached a three-year low of 4.4 percent in 2023q1 (Table 5.2).

  • These lows in the model-based and combined model & bracket imputation rates coincided with the increase in unedited records (Table 5.2).

Table 5.2 Interview Survey: income imputation rates for total amount of family income before taxes
Quarter    Respondents    Valid blanks converted (AVB)    Bracket imputation    Model imputation    Model & bracket    Unedited
2020q2     4,858    1.2    18.1    18.7    4.9    57.1
2020q3     4,980    1.1    18.2    19.0    5.1    56.6
2020q4     5,205    1.3    18.2    20.3    5.5    54.7
2021q1     5,115    1.4    17.8    19.9    5.5    55.5
2021q2     5,196    1.3    17.4    20.5    5.8    55.0
2021q3     5,121    1.2    18.1    19.7    5.4    55.5
2021q4     4,902    1.4    17.1    18.6    5.3    57.5
2022q1     5,187    1.3    17.8    17.9    5.2    57.8
2022q2     5,177    1.4    17.0    18.3    5.4    58.0
2022q3     4,580    1.1    17.9    17.4    5.3    58.3
2022q4     4,617    1.0    18.3    17.7    4.9    58.1
2023q1     4,807    1.1    18.5    16.4    4.4    59.5

6. Respondent burden (Interview Survey)

Respondent burden in the Interview Survey relates to the perceived level of effort exerted by respondents in answering the survey questions. Survey designers are concerned about respondent burden because it has the potential to negatively impact response rates and overall response quality. Beginning in April 2017, the Interview Survey introduced a respondent burden question, with response options describing five different levels of burden, at the end of the Wave 4 interview. The respondent burden metric is derived from this question and maps the five burden categories to three metric values: not burdensome, some burden, and very burdensome. Please see the DQP Reference Guide (Armstrong, Jones, Miller, and Pham, 2023) for more details on the question wording and the burden categories.
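The five-to-three collapse can be pictured as a simple lookup. The five option wordings below are placeholders of our own (the actual wording is given in the DQP Reference Guide); only the three output values match this report's categories:

```python
# Hypothetical option wordings; the real five-level scale is documented
# in the DQP Reference Guide.
BURDEN_MAP = {
    "not at all burdensome": "not burdensome",
    "a little burdensome": "some burden",
    "somewhat burdensome": "some burden",
    "very burdensome": "very burdensome",
    "extremely burdensome": "very burdensome",
}

def burden_category(response: str) -> str:
    """Collapse a five-level burden response into the three metric
    values; anything unrecognized is treated as a missing response."""
    return BURDEN_MAP.get(response, "missing response")
```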

A caveat to the interpretation of this metric is that since the burden question is only asked at the end of Wave 4, the metric may underestimate survey burden due to self-selection bias. That is, respondents who have agreed to participate in the final wave of the survey presumably find the survey less burdensome than sample units who had dropped out at any point prior to completing the final survey wave.

However, it is also possible that the respondent answering this question did not participate in prior interview waves. For example, the respondent who participated in the first three survey waves might move out of the sampled address prior to the final interview. This is not a common occurrence, but if someone else moves into the sampled address in time for the final wave, then they would be asked these questions.

Interview Survey Summary

  • In 2023q1, 28.7 percent of respondents report perceiving no burden, which is a decrease of 2.0 percentage points from 30.7 percent in 2020q2 (Table 6.1).

  • The percent of respondents who report perceiving some burden increased from 54.3 percent in 2020q2 to 55.5 percent in 2023q1, which is consistent with the drop in the percentage of respondents who perceive no burden (Table 6.1).

  • From 12.5 percent in 2020q2, the percentage of respondents who report that the survey was very burdensome increased to 13.1 percent in 2023q1 (Table 6.1).

Table 6.1 Interview Survey: respondents' perceived burden in the final survey wave
Quarter    Respondents    Not burdensome    Some burden    Very burdensome    Missing response
2020q2     1,334    30.7    54.3    12.5    2.5
2020q3     1,393    30.5    54.1    12.8    2.7
2020q4     1,386    29.7    53.5    14.9    1.9
2021q1     1,350    26.0    55.0    15.6    3.4
2021q2     1,337    29.0    55.8    12.3    2.9
2021q3     1,281    27.9    53.9    15.4    2.7
2021q4     1,223    24.2    57.9    15.3    2.6
2022q1     1,289    26.3    55.2    16.3    2.2
2022q2     1,320    28.4    54.7    14.6    2.3
2022q3     1,150    27.1    57.1    13.4    2.3
2022q4     1,125    28.3    54.8    15.0    1.9
2023q1     1,119    28.7    55.5    13.1    2.7

7. Survey mode (Diary and Interview Surveys)

These metrics measure the mode of data collection for the Diary and the Interview Surveys.

In the Diary Survey, there are two dimensions to the 'mode' of data collection. The first measures how data about the household (e.g., household size, demographic characteristics, income and assets, etc.) were collected by the Census Field Representative (i.e., mostly in-person or mostly over the phone). The second measures the diary form used by respondents when entering expenses during the diary keeping period (i.e., online or paper). Until recently, the Diary Survey was administered strictly in paper form. As part of the CE program’s redesign effort, a new online diary mode was introduced.[10] This new mode prompted the inclusion of a quality metric that tracks the mode of diary chosen by the respondent at the time of placement. It should be noted that while the online diary became available in July 2020 as a supplemental data collection tool during the onset of the COVID-19 pandemic, it was not officially implemented into CE production until July 2022.

The Interview Survey was designed to be conducted in-person. However, the interviewer can also collect data over the phone, or by a combination of the two modes. Higher rates of in-person data collection are preferred since the interviewer can actively prompt the respondent, as well as encourage the use of recall aids, thereby reducing the risk of measurement error. Conducting first wave interviews in-person is especially important as this is typically the respondent’s first exposure to the survey. This serves as an opportunity for the Census FR to build rapport with the household. Additionally, BLS has agreements with the Census Bureau that, whenever possible, no more than 24 percent of first interviews or 48 percent of subsequent interviews will be collected over the phone. More information on how we calculate the mode metrics can be found in the DQP Reference Guide (Armstrong, Jones, Miller, and Pham, 2023).
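The BLS-Census targets on telephone collection amount to a simple share check. A sketch, assuming the agreement is evaluated as a straight proportion of completed interviews (the agreement's exact accounting rules may differ):

```python
def phone_share_ok(n_phone: int, n_total: int, first_wave: bool) -> bool:
    """Check a phone-collection share against the targets noted above:
    at most 24 percent of first interviews, and at most 48 percent of
    subsequent interviews, collected over the phone."""
    limit = 0.24 if first_wave else 0.48
    return n_phone / n_total <= limit

# e.g. 20 phone interviews out of 100 first-wave cases is within target,
# while 30 out of 100 would exceed the 24 percent first-interview limit.
print(phone_share_ok(20, 100, first_wave=True))
```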

Diary Survey Mode Summary

  • The rate of in-person collection for diary household data hovered near 70 percent over the last three quarters of available data (Table 7.1). This followed a two-year cumulative increase of 68.7 percentage points between 2020q2 and 2022q2, after the rate dropped to near zero in 2020q2 at the onset of the COVID-19 pandemic (Table 7.1).

  • After the early pandemic period in 2020, the rate of in-person data collection increased from 46.5 percent in 2021q1 to 69.9 percent in 2022q4 (Table 7.1).

Table 7.1 Diary Survey: Interview Phase Mode
Quarter   Number of Diary Cases   In-Person (%)   Telephone (%)   Missing (%)
2020q1    3,285                   76.3            20.8            3.0
2020q2    1,936                   0.9             97.2            1.9
2020q3    2,559                   24.5            73.6            1.9
2020q4    2,835                   43.8            53.1            3.1
2021q1    2,952                   46.5            50.3            3.1
2021q2    3,224                   59.6            36.7            3.7
2021q3    3,027                   64.6            32.9            2.5
2021q4    2,864                   60.8            32.8            6.4
2022q1    3,357                   63.1            32.7            4.3
2022q2    3,239                   69.6            25.0            5.4
2022q3    3,248                   65.9            26.5            4.0
2022q4    2,865                   69.9            26.0            4.1

Expenditure Diary Survey Mode Summary

  • In the two most recent quarters, the proportion of paper diaries rose to 71.4 percent in 2022q3, from 68.4 percent the previous quarter, and remained there in 2022q4 (Table 7.2).

  • The proportion of online diaries, on the other hand, fell to 25.5 percent in 2022q3 and 25.4 percent in 2022q4 (Table 7.2).


Table 7.2 Diary Survey: Survey Mode
Quarter   Number of Diary Cases   Paper (%)   Online (%)   Missing (%)
2020q3    2,559                   66.3        33.1         0.6
2020q4    2,835                   71.3        26.8         1.9
2021q1    2,952                   71.2        27.1         1.6
2021q2    3,224                   70.8        27.1         2.1
2021q3    3,027                   70.5        27.9         1.6
2021q4    2,864                   69.6        26.0         4.3
2022q1    3,357                   69.1        27.8         3.2
2022q2    3,239                   68.4        28.7         2.9
2022q3    3,248                   71.4        25.5         3.1
2022q4    2,865                   71.4        25.4         3.2

Interview Survey Summary

  • The rate of in-person interviews has remained between 35.0 and 37.5 percent, across all waves, for the past four quarters of available data (Table 7.3).

  • This continues the post-pandemic trend of intermittent increases in the proportion of in-person interviews, followed by periods of stagnation (Table 7.3).

  • The previous stall in improvement occurred between 2021q4 and 2022q1, when the proportion of in-person interviews held at or below 31.0 percent, after rising from 18.1 percent in 2021q1 to 31.8 percent in 2021q3 (Table 7.3).

  • When comparing rates of in-person interviews by wave, it is evident that Wave 1 has seen the greatest growth since the drop to near zero during the onset of the COVID-19 pandemic in early 2020 (Table 7.4).

  • After reaching a low of 1.5 percent in 2020q2, the rate of Wave 1 in-person interviews increased to 50.3 percent in 2022q2 and has since hovered between 46.8 and 49.3 percent (Table 7.4).

  • Rates of in-person interviews in Waves 2 & 3 also saw growth after the low point of 1.8 percent in 2020q2, rising to 34.3 percent by 2023q1 (Table 7.4). 

  • Wave 4 in-person interview rates saw growth similar to Waves 2 & 3, increasing steadily to 31.0 percent by 2023q1 from a low of 1.9 percent in 2020q2 (Table 7.4).

Table 7.3 Interview Survey: Survey Mode
Quarter   Number of Respondents   In-Person (%)   Telephone (%)   Missing (%)
2020q2    4,858                   1.7             98.0            0.3
2020q3    4,980                   9.3             90.4            0.3
2020q4    5,205                   19.5            80.3            0.2
2021q1    5,115                   18.1            81.6            0.3
2021q2    5,196                   26.3            73.4            0.3
2021q3    5,121                   31.8            67.8            0.4
2021q4    4,902                   30.7            69.0            0.3
2022q1    5,187                   31.0            68.8            0.2
2022q2    5,177                   36.4            63.1            0.5
2022q3    4,580                   35.9            63.0            1.1
2022q4    4,617                   35.3            63.3            1.4
2023q1    4,807                   37.5            61.9            0.6
Table 7.4 Interview Survey: In-Person Interviews by Wave
Quarter   Number of Respondents   Wave 1 (%)   Waves 2 & 3 (%)   Wave 4 (%)
2020q2    4,858                   1.5          1.8               1.9
2020q3    4,980                   13.0         8.6               7.4
2020q4    5,205                   28.9         17.6              14.6
2021q1    5,115                   28.7         15.9              12.2
2021q2    5,196                   36.7         24.0              20.5
2021q3    5,121                   46.1         28.1              24.0
2021q4    4,902                   42.6         27.6              25.0
2022q1    5,187                   42.1         28.5              24.2
2022q2    5,177                   50.3         32.1              30.8
2022q3    4,580                   49.3         32.1              28.3
2022q4    4,617                   46.8         32.5              28.2
2023q1    4,807                   49.1         34.3              31.0

8. Survey Response Time (Diary and Interview Surveys)

In both the Interview and Diary Surveys, survey response time is defined as the number of minutes needed to complete an interview. For the Diary Survey, the metric is the median number of minutes to complete the personal interview component, which collects household information on income and demographics. For the Interview Survey, the metric is the median number of minutes to complete the interview. In the Interview Survey, Wave 1 and Wave 4 interviews are typically longer because they collect additional information, such as household demographics or assets and liabilities. Survey response time is used in CE as an objective indicator of respondent burden: presumably, the longer the time needed to complete the survey, the more burdensome the survey. Fricker, Gonzalez, and Tan (2011) find that higher respondent burden negatively affects both response rates and data quality. However, survey response time could also reflect the respondent's degree of engagement: engaged and conscientious respondents might take longer to complete the survey because they report more thoroughly or use records more extensively. Regardless, tracking the median survey response time is useful for assessing the effect of changes in the survey design.
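The response time metric amounts to a median over per-interview durations within each wave group. A minimal sketch, with invented durations (the groupings follow the wave structure described above; the numbers are not CE data):

```python
# Minimal sketch of the survey response time metric: the median number of
# minutes to complete an interview, computed per wave group. All durations
# below are invented for illustration.
from statistics import median

durations = {
    "Wave 1": [74, 78, 81, 90, 76],
    "Waves 2 & 3": [55, 54, 58, 61],
    "Wave 4": [62, 68, 63],
}

for group, minutes in durations.items():
    print(f"{group}: median {median(minutes):.1f} minutes")
```

The median is preferred over the mean here because a few unusually long interviews would otherwise pull the summary upward.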

Diary Survey Summary

  • Median Diary Survey response time rose by just 1.1 minutes over the past three years, from 33.3 minutes in 2020q1 to 34.4 minutes in 2022q4, though the metric varied somewhat throughout the period (Table 8.1).

  • Median response time fluctuated between 32.4 and 35.1 minutes from 2020q1 to 2022q2, before jumping to 38.0 minutes in 2022q3, and then falling back to 34.4 in 2022q4 (Table 8.1).

Table 8.1 Diary Survey: Median Length of Time to Complete the Interview Components (Income and Demographics)
Quarter   Number of Diary Cases   Minutes
2020q1    3,281                   33.3
2020q2    1,936                   34.9
2020q3    2,559                   34.9
2020q4    2,835                   32.6
2021q1    2,952                   32.7
2021q2    3,224                   32.9
2021q3    3,027                   32.4
2021q4    2,864                   34.9
2022q1    3,357                   34.4
2022q2    3,239                   35.1
2022q3    3,248                   38.0
2022q4    2,865                   34.4

Interview Survey Summary

  • Median interview time has trended upward across all waves since 2020q2 (Table 8.2).

  • Median time for Wave 1 interviews rose from 76.4 minutes in 2020q2 to 83.9 minutes in 2023q1, peaking at 88.5 minutes in 2022q3 (Table 8.2).

  • In the last three years, median time to complete Waves 2 and 3 interviews ranged between 54.6 and 62.5 minutes (Table 8.2).

  • Median time for Wave 4 interviews ranged between 58.8 and 69.5 minutes (Table 8.2).

  • In 2022q3, median interview times rose above the normal range for all waves, following the implementation of Computer Audio Recorded Interviewing (CARI). This was expected, as a similar jump in median time occurred during the pretest of CARI with Wave 4 interview participants in 2021q4 (Table 8.2).

  • Wave 1 and Wave 4 median interview times have decreased following the initial implementation of CARI, but median times for Waves 2 & 3 interviews have continued to rise (Table 8.2).

Table 8.2 Interview Survey: Median Length of Time to Complete the Survey, in Minutes
Quarter   Number of Respondents   Wave 1   Waves 2 & 3   Wave 4
2020q2    4,855                   76.4     54.6          62.2
2020q3    4,980                   76.8     56.7          62.2
2020q4    5,205                   75.0     56.2          60.4
2021q1    5,115                   74.4     54.6          61.7
2021q2    5,196                   76.7     54.6          58.8
2021q3    5,121                   78.0     54.6          60.0
2021q4    4,902                   80.2     57.8          69.5
2022q1    5,187                   79.6     57.7          62.8
2022q2    5,177                   79.2     57.7          63.1
2022q3    4,580                   88.5     61.8          69.2
2022q4    4,617                   84.1     62.0          68.2
2023q1    4,807                   83.9     62.5          67.7

Summary

BLS is committed to producing data that are consistently of high statistical quality. As part of that commitment, BLS publishes the DQP and its accompanying Reference Guide (Armstrong, Jones, Miller, and Pham, 2023) to assist data users as they evaluate CE data quality metrics and judge whether CE data fit their needs. DQP metrics therefore cover both the Interview and Diary Surveys, multiple dimensions of data quality, and several stages of the survey lifecycle. Additionally, BLS uses these metrics internally to identify areas for potential survey improvement, evaluate the effects of survey changes, and to monitor the health of the surveys.

Response rates for the Diary Survey have recovered from a low of 26.1 percent in early 2020 but remain on the general downward trend that was visible prior to the onset of the COVID-19 pandemic. Likewise, response rates in the Interview Survey have continued their overall decline, though the noticeable drop in recent quarters was partially attributable to cost-saving measures employed by Census during that period.

Despite this continued downturn in overall response rates, quality metric trends relating to the administration and processing of the CE Surveys have yielded more encouraging results.

Regarding survey administration, the rates of records use (Interview Survey), Information Booklet use (both CE Surveys), and in-person Wave 1 interviews have all trended upward in recent quarters, while online diary rates and median Interview Survey time have fallen. Records use in particular, which past CE research has shown to be a helpful tool for improving data quality, has increased across all Interview Survey waves (Wilson, 2017).

With respect to survey processing, the percentage of allocations and imputations in the Interview Survey expenditure edits have continued to decrease, while allocation rates in the Diary Survey varied little. The rate of income imputations experienced little variation for both CE Surveys over the time period covered, although both saw a slight uptick in the proportion of unedited incomes.

BLS will continue to monitor these trends. The next issue of the CE Data Quality Profile will be released in May 2024, alongside BLS’s midyear release of 2023 CE data, and will feature CE Diary Survey data through 2023q2 and CE Interview Survey data through 2023q3.

References

Abdirizak, S., Erhard, L., Lee, Y., & McBride, B. (2017). Enhancing Data Quality Using Expenditure Records. Paper presented at the Annual Conference of the American Association for Public Opinion Research, New Orleans, LA.

Armstrong, G., Jones, G., Miller, T., & Pham, S. (2023). CE Data Quality Profile Reference Guide. Consumer Expenditure Surveys Program Report Series. Bureau of Labor Statistics.

Ash, S., Nix, B., & Steinberg, B. (2022). Report on Nonresponse Bias during the COVID-19 Period for the Consumer Expenditure Interview Survey. Consumer Expenditure Surveys Program Report Series. Bureau of Labor Statistics.

Elkin, I., McBride, B., & Steinberg, B. (2018). Results from the Incentives Field Test for the Consumer Expenditure Survey Interview Survey. Consumer Expenditure Surveys Program Report Series. Bureau of Labor Statistics.

Fricker, S., Gonzalez, J., & Tan, L. (2011). Are You Burdened? Let's Find Out. Paper presented at the Annual Conference of the American Association for Public Opinion Research, Phoenix, AZ.

Paulin, G., Reyes-Morales, S., & Fisher, J. (2018). User's Guide to Income Imputation in the CE. U.S. Bureau of Labor Statistics.

Wilson, T. J. (2017). The Impact of Record Use in the CE Interview Survey. CE Survey Methods Symposium. Bureau of Labor Statistics.

Footnotes


[1] The Office of Management and Budget has oversight over all Federal surveys and provides the rules under which they operate.  See the Federal Register notice for more details.

[2] Standard errors are also available in the CE LABSTAT database, as of 2022.
[3] Instructions on using the CE PUMD to create variables and flags for quality analysis can be found in the CE PUMD Getting Started Guide.
[4] More information may be found on the CE Frequently Asked Questions (FAQ) page.
[5] Work from Ash, Nix and Steinberg on nonresponse bias during the COVID-19 pandemic for both the CE Interview Survey and the CE Diary Survey can be accessed in the CE Library. 
[6] The Diary Survey’s sample size increased in 2020q1 to support the Consumer Price Index’s Commodities and Services Survey sample frame.
[7] In the Interview Survey, each family in the sample is interviewed every 3 months over four calendar quarters. These interviews are commonly referred to as waves. For more information on survey administration, please see the CE handbook of methods.
[8] This “Did not use” category does not include records where there was no Information Booklet available.
[9] This increase in sample size was made possible by increased funding to accommodate collection of outlet information needed for calculating the Consumer Price Index.
[10] The Gemini Project was launched to research and develop a redesign of the Consumer Expenditure (CE) surveys, addressing issues of measurement error and respondent burden.

Last Modified Date: December 13, 2023