Consumer Expenditure Surveys

The CE Midyear Data Quality Profile — 2023

Authors: Grayson Armstrong, Gray Jones, Tucker Miller, and Laura Petit

This paper was published as part of the Consumer Expenditure Surveys Program Report Series.

Table of Contents

Overview
Highlights
1. Final disposition rates of eligible sample units (Diary and Interview Surveys)
2. Records Use (Interview Survey)
3. Information Booklet use (Diary and Interview Surveys)
4. Expenditure edit rates (Diary and Interview Surveys)
5. Income imputation rates (Diary and Interview Surveys)
6. Respondent burden (Interview Survey)
7. Survey mode (Diary and Interview Surveys)
8. Survey Response Time (Diary and Interview Surveys)
Summary
References

 

Overview

The Bureau of Labor Statistics (BLS) is committed to producing consistently high-quality data (i.e., accurate, objective, relevant, timely, and accessible) in accordance with Statistical Policy Directive No. 1.[1] This Directive, issued by the Office of Management and Budget, affirms the fundamental responsibilities of Federal statistical agencies and recognized statistical units in the design, collection, processing, editing, compilation, storage, analysis, release, and dissemination of statistical information. The BLS Consumer Expenditure Surveys (CE) program provides data users with a variety of resources to assist them in analyzing overall CE data quality. CE data users can evaluate quality on their own by utilizing the following:

  • CE standard error tables[2]

  • CE public use microdata (PUMD)[3]

In addition, the Data Quality Profile (DQP) provides a comprehensive set of quality metrics that are timely, routinely updated, and accessible to users. For data users, DQP metrics are an indication of quality for both the Interview Survey and the Diary Survey.[4] For internal stakeholders, these metrics signal areas for improvements to the surveys.

This DQP includes, for each metric, a brief description of the metric, along with accompanying results, which are tabulated and graphed. The DQP Reference Guide gives detailed descriptions of the metrics, computations, and methodology (Armstrong, Jones, Miller & Pham 2023). The intention of the DQP report series is to highlight recent trends that may impact CE data quality, and for this purpose, the DQP reports cover the three most recent years of available data.

Prior DQPs are available on the CE Library Page. BLS began publishing annual DQPs with 2017 data, though prototype DQPs are available for 2013 and 2015. Midyear DQPs started with the 2020 midyear data release, which covered the period of July 2019 through June 2020.

The data quality metrics are reported in a quarterly format, where the quarter represents the three-month period in which the survey data were collected. Because Interview Survey respondents are asked to recall expenditures from the prior three months, the data collected in 2023q3 include expenditures made in 2023q2. For example, an interview conducted in July 2023 would include expenditures from April, May, and June of 2023. In contrast, respondents to the Diary Survey report expenditures on the days they were incurred during the two-week diary-keeping period. This is why this report’s Interview Survey metrics appear to be “ahead” of the Diary Survey by a quarter (e.g., 2023q3 for the Interview Survey and 2023q2 for the Diary Survey).
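
As an illustration of the recall-period arithmetic described above, the short Python sketch below maps an Interview Survey collection month to the three prior months whose expenditures it covers. The function name and input format are illustrative assumptions, not part of CE processing.

```python
from datetime import date

def interview_reference_months(collection_year: int, collection_month: int) -> list[date]:
    """Return the three months preceding the interview collection month."""
    months = []
    year, month = collection_year, collection_month
    for _ in range(3):
        month -= 1
        if month == 0:
            month, year = 12, year - 1
        months.append(date(year, month, 1))
    return sorted(months)

# An interview collected in July 2023 (2023q3) covers April-June 2023 (2023q2).
print([m.strftime("%B %Y") for m in interview_reference_months(2023, 7)])
# ['April 2023', 'May 2023', 'June 2023']
```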


Highlights

In this section, we highlight noteworthy metric trends from the past three years. This time frame covers the third quarter of 2020 through the second quarter of 2023 for the CE Diary Survey, and the fourth quarter of 2020 through the third quarter of 2023 for the CE Interview Survey. Subsequent sections describe all the individual metrics with detailed data tables.

Recent Trends of Note

  • Diary Survey response rates have trended downward over the past five quarters of available data, from 43.9 percent in 2022q1 to 40.2 percent in 2023q2 (Table 1.1).

  • Interview Survey other nonresponse rates in 2022q3 differ from other quarters under study, due to the Census Bureau reducing contact attempts as a part of cost-cutting measures in place during August and September. As a result, Interview Survey response rates dropped by 5.4 percentage points in one quarter, from 46.2 percent in 2022q2 to 40.8 percent in 2022q3 (Table 1.3).

  • Allocation rates in the Diary Survey fell from 10.3 percent in 2022q4 to 7.3 percent in 2023q2, which is explained by the consolidation of many Universal Classification Codes (UCCs) at the beginning of 2023 (Chart 4.1).[5]

  • The rate of completed in-person interviews across all waves rose above 40 percent for the first time since the onset of the COVID-19 pandemic in early 2020. This was driven by recent increases in in-person interviews for Waves 2, 3, and 4 (Table 7.4).

  • Median Interview Survey time for Wave 1 interviews rose approximately 3.8 minutes, from 81.0 minutes in 2023q2 to 84.8 minutes in 2023q3 (Table 8.2).

  • Overall, the indicators of CE data quality analyzed in this report varied little from the previous Annual Data Quality Profile published alongside the 2022 annual release of CE data.[6]

1. Final disposition rates of eligible sample units (Diary and Interview Surveys)

Final disposition rates of eligible sample units represent the final participation outcomes of field staff's survey recruitment efforts. The BLS classifies the "final outcome" of eligible sample units into the following four main categories:

  1. Completed interview
  2. Nonresponse due to refusal
  3. Nonresponse due to noncontact
  4. Nonresponse due to other reasons

Completed interviews reclassified to a nonresponse by BLS staff are included in the other nonresponse category and are presented in the nonresponse reclassification tables (Tables 1.2 and 1.4). More information on the nonresponse reclassification edit and the other nonresponse categories, along with information on how BLS staff calculate response rates, can be found in the DQP Reference Guide (Armstrong, Jones, Miller, and Pham, 2023).

The key point of interest for response rates is that low response rates can indicate the potential for nonresponse bias in an expenditure estimate if the cause of nonresponse is correlated with that expenditure category. While recent research on nonresponse bias for both the CE Interview Survey and the CE Diary Survey has not shown significant bias in the CE survey estimates during the COVID-19 pandemic (Ash, Nix, and Steinberg, 2022), BLS continues to monitor this risk.

Response rates in this report are presented as unweighted, because unweighted rates measure the effectiveness of our data collection efforts. When response rates were previously calculated using weights, they showed no meaningful difference from the unweighted rates.          
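
A minimal sketch of how the unweighted distributions in Tables 1.1 and 1.3 can be tabulated from final outcome codes is shown below; the category labels and input format are assumptions for illustration, not the actual CE processing code.

```python
from collections import Counter

CATEGORIES = ("interview", "refusal", "noncontact", "other_nonresponse")

def disposition_rates(final_outcomes: list[str]) -> dict[str, float]:
    """Percent of eligible sample units falling in each final disposition category."""
    eligible = len(final_outcomes)
    counts = Counter(final_outcomes)
    return {cat: round(100 * counts.get(cat, 0) / eligible, 1) for cat in CATEGORIES}

# Toy example with eight eligible sample units
outcomes = ["interview", "refusal", "interview", "noncontact",
            "other_nonresponse", "refusal", "interview", "refusal"]
print(disposition_rates(outcomes))
# {'interview': 37.5, 'refusal': 37.5, 'noncontact': 12.5, 'other_nonresponse': 12.5}
```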

Diary Survey Summary

  • In March 2020, the Census Bureau temporarily suspended in-person diary placement interviews due to the COVID-19 pandemic. This resulted in declines in response, refusal, and noncontact rates and a large increase in other nonresponse rates (Table 1.1). Since the start of the three-year period coincides with the onset of the pandemic, when final disposition rates were anomalous, final disposition rates will be compared to 2020q4.[7]

  • Response rates varied between 36.5 percent and 43.9 percent from 2020q4 to 2023q2 (Table 1.1).

  • Refusal rates increased by 5.0 percentage points from 2020q4 (34.7 percent) to 2023q2 (39.7 percent) (Table 1.1).

    • The refusal rate exceeded the response rate on two separate occasions, 2021q4 and 2022q4, by 1.7 and 2.8 percentage points, respectively (Table 1.1).

  • Noncontact rates have hovered between 7.6 percent and 12.1 percent since 2020q4 (Table 1.1).

  • Overall, other nonresponse rates declined from 18.8 percent in 2020q4 to 9.9 percent in 2023q2 (Table 1.1).

Table 1.1 Diary Survey: Distribution of final dispositions for eligible sample units (unweighted)
Quarter | Number of eligible sample units | Interview | Refusal | Noncontact | Other nonresponse
2020q3 | 7,784 | 32.9 | 22.2 | 7.2 | 37.7
2020q4 | 7,774 | 36.5 | 34.7 | 10.1 | 18.8
2021q1 | 7,488 | 39.4 | 34.4 | 7.6 | 18.6
2021q2 | 7,584 | 42.5 | 34.9 | 8.8 | 13.8
2021q3 | 7,456 | 40.7 | 37.0 | 11.1 | 11.2
2021q4 | 7,676 | 37.3 | 39.0 | 11.9 | 11.8
2022q1 | 7,645 | 43.9 | 36.3 | 9.5 | 10.3
2022q2 | 7,556 | 42.9 | 36.9 | 9.8 | 10.4
2022q3 | 7,594 | 42.8 | 36.9 | 9.7 | 10.7
2022q4 | 7,749 | 37.0 | 39.8 | 12.1 | 11.1
2023q1 | 7,610 | 41.6 | 39.2 | 10.1 | 9.1
2023q2 | 7,695 | 40.2 | 39.7 | 10.3 | 9.9

Table 1.2 Diary Survey: Prevalence of nonresponse reclassification
Quarter | Number of eligible sample units | Total reclassifications | COVID-19 reclassifications | Other reclassifications
2020q3 | 7,784 | 250 | 34 | 216
2020q4 | 7,774 | 248 | 10 | 238
2021q1 | 7,488 | 374 | 2 | 372
2021q2 | 7,584 | 353 | 0 | 353
2021q3 | 7,456 | 348 | 0 | 348
2021q4 | 7,676 | 387 | 0 | 387
2022q1 | 7,645 | 362 | 0 | 362
2022q2 | 7,556 | 377 | 0 | 377
2022q3 | 7,594 | 348 | 0 | 348
2022q4 | 7,749 | 345 | 0 | 345
2023q1 | 7,610 | 308 | 0 | 308
2023q2 | 7,695 | 347 | 0 | 347

Interview Survey Summary

  • Response rates were 4.6 percentage points lower in 2023q3 (41.9 percent) than what was observed in 2020q4 (46.5 percent) (Table 1.3).

  • Due to the Census Bureau reducing contact attempts as a part of cost-cutting measures in place during August and September, other nonresponse rates experienced an anomalous increase of 17.0 percentage points in 2022q3, resulting in a substantial drop in the refusal rate in the same quarter.

  • Outside of this anomaly in 2022q3, refusal rates have fluctuated between 33.9 percent and 44.3 percent since 2020q4. Since 2022q3, refusal rates have been increasing, reaching 39.7 percent in 2023q3 (Table 1.3).

    • In 2021q4, the refusal rate (44.3 percent) exceeded the response rate (43.5 percent) for the first time, by 0.8 percentage points. Refusal rates subsequently fell below the response rate and have not exceeded it since (Table 1.3).

  • Noncontact rates are substantially higher than they were in 2020q4 (6.3 percent), reaching a series high of 23.0 percent in 2022q4. Since then, noncontact rates have declined, with 2023q3 returning a noncontact rate of 16.8 percent (Table 1.3).

  • Excluding the cost cutting period, other nonresponse rates were lower than what was observed in 2020q4 (10.4 percent), falling to 1.7 percent in 2023q3 (Table 1.3).

Table 1.3 Interview Survey: Distribution of final dispositions for eligible sample units (unweighted)
Quarter | Number of eligible sample units | Interview | Refusal | Noncontact | Other nonresponse
2020q4 | 11,185 | 46.5 | 36.8 | 6.3 | 10.4
2021q1 | 11,125 | 46.0 | 38.9 | 6.8 | 8.3
2021q2 | 11,120 | 46.7 | 41.1 | 9.5 | 2.7
2021q3 | 11,117 | 46.1 | 43.0 | 8.4 | 2.5
2021q4 | 11,275 | 43.5 | 44.3 | 9.9 | 2.3
2022q1 | 11,320 | 45.8 | 42.8 | 9.3 | 2.1
2022q2 | 11,202 | 46.2 | 43.5 | 8.3 | 2.0
2022q3 | 11,235 | 40.8 | 23.8 | 16.4 | 19.0
2022q4 | 11,248 | 41.0 | 33.9 | 23.0 | 2.0
2023q1 | 11,299 | 42.5 | 36.5 | 20.1 | 0.9
2023q2 | 11,308 | 42.0 | 38.0 | 18.9 | 1.1
2023q3 | 11,383 | 41.9 | 39.7 | 16.8 | 1.7

Table 1.4 Interview Survey: Prevalence of nonresponse reclassifications
Quarter | Number of eligible sample units | Total reclassifications | COVID-19 reclassifications | Other reclassifications
2020q4 | 11,185 | 32 | 14 | 18
2021q1 | 11,125 | 72 | 2 | 70
2021q2 | 11,120 | 522 | 0 | 522
2021q3 | 11,117 | 156 | 0 | 156
2021q4 | 11,275 | 16 | 0 | 16
2022q1 | 11,320 | 13 | 0 | 13
2022q2 | 11,202 | 13 | 0 | 13
2022q3 | 11,235 | 3 | 0 | 3
2022q4 | 11,248 | 10 | 0 | 10
2023q1 | 11,299 | 11 | 0 | 11
2023q2 | 11,308 | 0 | 0 | 0
2023q3 | 11,383 | 0 | 0 | 0

2. Records Use (Interview Survey)

The Records Use metric measures the proportion of respondents who refer to records while answering the Interview Survey questions, as reported by Census Field Representatives (FRs). Examples of records include, but are not limited to: receipts, bills, checkbooks, and bank statements. Records use is retrospectively recorded by the interviewer at the end of the interview. Past research has shown that respondents who use expenditure records report more expenditures with lower rates of missing data (Abdirizak, Erhard, Lee, and McBride, 2017), so a higher prevalence of records use is desirable. Metrics in this section are presented by survey wave.[8]

Interview Survey Summary

  • Since 2021q3, records use across all waves experienced a general upward trend (Graph 2.1).[9]

    • Records use in Wave 1 experienced a three-year high of 60.8 percent in 2023q1 (Table 2.1).

    • Records use in Waves 2 – 3 recorded a series high of 59.8 percent in 2023q1 (Table 2.1).

    • Records use in Wave 4 recorded a series high of 61.7 percent in 2023q1 (Table 2.1).

  • In 2023q2 and 2023q3, record use rates were lower across all waves when compared to 2023q1. It is unclear if this decline in record use rates indicates normal quarterly variation, a leveling off period, or a larger change in the direction of record use (Table 2.1).

    • Wave 4 record use rates experienced the largest decline in series history, 5.3 percentage points, from 2023q2 to 2023q3 (Table 2.1).

Table 2.1 Interview Survey: Prevalence of records use among respondents
Quarter | Wave | Number of respondents | Used | Did not use | Missing response
2020q4 | Wave 1 | 1,230 | 50.1 | 49.6 | 0.3
2020q4 | Waves 2 & 3 | 2,589 | 50.1 | 49.3 | 0.5
2020q4 | Wave 4 | 1,386 | 51.9 | 47.8 | 0.2
2021q1 | Wave 1 | 1,250 | 52.0 | 47.4 | 0.6
2021q1 | Waves 2 & 3 | 2,515 | 50.3 | 49.4 | 0.4
2021q1 | Wave 4 | 1,350 | 52.4 | 47.0 | 0.7
2021q2 | Wave 1 | 1,325 | 49.8 | 49.6 | 0.6
2021q2 | Waves 2 & 3 | 2,534 | 47.8 | 51.4 | 0.7
2021q2 | Wave 4 | 1,337 | 50.5 | 48.9 | 0.6
2021q3 | Wave 1 | 1,352 | 53.0 | 46.1 | 1.0
2021q3 | Waves 2 & 3 | 2,488 | 48.6 | 50.6 | 0.8
2021q3 | Wave 4 | 1,281 | 49.6 | 49.6 | 0.8
2021q4 | Wave 1 | 1,229 | 54.8 | 44.4 | 0.8
2021q4 | Waves 2 & 3 | 2,450 | 53.2 | 46.4 | 0.4
2021q4 | Wave 4 | 1,223 | 54.0 | 45.3 | 0.7
2022q1 | Wave 1 | 1,347 | 60.3 | 39.2 | 0.5
2022q1 | Waves 2 & 3 | 2,551 | 53.9 | 45.7 | 0.4
2022q1 | Wave 4 | 1,289 | 56.7 | 42.7 | 0.5
2022q2 | Wave 1 | 1,325 | 55.4 | 43.5 | 1.1
2022q2 | Waves 2 & 3 | 2,532 | 52.6 | 46.6 | 0.8
2022q2 | Wave 4 | 1,320 | 54.4 | 45.1 | 0.5
2022q3 | Wave 1 | 1,277 | 57.6 | 40.3 | 2.1
2022q3 | Waves 2 & 3 | 2,153 | 55.7 | 43.1 | 1.1
2022q3 | Wave 4 | 1,150 | 57.0 | 42.0 | 1.0
2022q4 | Wave 1 | 1,234 | 57.1 | 40.5 | 2.4
2022q4 | Waves 2 & 3 | 2,258 | 55.4 | 42.9 | 1.7
2022q4 | Wave 4 | 1,125 | 59.0 | 40.0 | 1.0
2023q1 | Wave 1 | 1,288 | 60.8 | 37.6 | 1.6
2023q1 | Waves 2 & 3 | 2,400 | 59.8 | 39.4 | 0.8
2023q1 | Wave 4 | 1,119 | 61.7 | 37.6 | 0.7
2023q2 | Wave 1 | 1,263 | 56.5 | 42.2 | 1.3
2023q2 | Waves 2 & 3 | 2,369 | 57.9 | 41.7 | 0.5
2023q2 | Wave 4 | 1,119 | 61.4 | 38.0 | 0.6
2023q3 | Wave 1 | 1,246 | 57.7 | 40.9 | 1.4
2023q3 | Waves 2 & 3 | 2,314 | 57.6 | 41.8 | 0.6
2023q3 | Wave 4 | 1,210 | 56.1 | 43.4 | 0.5

3. Information Booklet use (Diary and Interview Surveys)

The Information Booklet, which covers both the Interview and Diary Surveys, is a recall aid that the Census FR provides to respondents. The Information Booklet provides response options for demographic questions and bracket response options for income questions. Additionally, survey respondents can use the Information Booklet to view clarifying examples of the specific expenditures that each section/item code is intended to collect.

This metric identifies the prevalence of Information Booklet use among respondents during their interviews, according to Census FRs. Typically, for interviews conducted over the phone, the Information Booklet is not readily available to the respondent (although a PDF version is available on the BLS website). Thus, this metric should be interpreted in conjunction with the rise in telephone interviews during the COVID-19 pandemic. Higher rates of Information Booklet usage are encouraged, as use can improve reporting quality by clarifying concepts and providing examples.

Diary Survey Summary

  • After the reintroduction of in-person interviewing in 2020q3, the percentage of CUs that used the Information Booklet increased every quarter before plateauing in 2022q3 and dropping off slightly in 2023q2. Usage has yet to recover to its pre-pandemic level from 2020q1 (Graph 3.1).

  • The prevalence of Information Booklet use among Diary Survey respondents increased 19 percentage points from 7.3 percent in 2020q3 to 26.5 percent in 2023q2 (Table 3.1).

  • While there was an upward trend in Information Booklet use among Diary Survey respondents between 2020q3 and 2022q3, Information Booklet use rates stagnated in 2022q4 and declined in 2023q2.

Table 3.1 Diary Survey: Prevalence of Information Booklet use among respondents
Quarter | Number of respondents | Used | Did not use | Missing response
2020q3 | 2,559 | 7.3 | 90.8 | 1.9
2020q4 | 2,835 | 10.5 | 86.4 | 3.1
2021q1 | 2,952 | 12.7 | 84.2 | 3.1
2021q2 | 3,224 | 16.7 | 79.6 | 3.7
2021q3 | 3,027 | 20.0 | 77.5 | 2.5
2021q4 | 2,864 | 22.2 | 71.3 | 6.4
2022q1 | 3,357 | 25.9 | 69.8 | 4.3
2022q2 | 3,239 | 26.8 | 67.7 | 5.5
2022q3 | 3,248 | 27.9 | 68.1 | 4.0
2022q4 | 2,865 | 27.8 | 68.1 | 4.1
2023q1 | 3,162 | 27.9 | 67.9 | 4.2
2023q2 | 3,095 | 26.5 | 69.7 | 3.8

Interview Survey Summary

  • Due to the COVID-19 pandemic, BLS temporarily discontinued the use of physical Information Booklets, before reintroducing them in 2020q3 as in-person interviews resumed (Graph 3.2).

  • This discontinuation led to the rate of Information Booklet use being below 15 percent for all waves in 2020q4 (Table 3.2).

  • Since physical Information Booklets became available again and in-person interviews resumed, Information Booklet use has continued to rise across all interview waves (Table 3.2).

  • Information Booklet use among Wave 1 respondents recovered the most since 2020q4, increasing 23.8 percentage points to 36.2 percent in 2023q3 (Table 3.2).

  • In 2022q4 there was a 2 percentage point drop in Information Booklet use among Wave 1 respondents, but this drop reversed in 2023q1 with a 3.6 percentage point increase (Table 3.2).

Table 3.2 Interview Survey: Prevalence of Information Booklet use among respondents
Quarter | Wave | Number of respondents | Used | Did not use[10] | Missing response
2020q4 | Wave 1 | 1,230 | 12.4 | 6.7 | 0.3
2020q4 | Waves 2 & 3 | 2,589 | 9.4 | 3.6 | 0.5
2020q4 | Wave 4 | 1,386 | 7.4 | 3.8 | 0.2
2021q1 | Wave 1 | 1,250 | 13.3 | 6.2 | 0.6
2021q1 | Waves 2 & 3 | 2,515 | 9.3 | 3.3 | 0.4
2021q1 | Wave 4 | 1,350 | 8.5 | 4.2 | 0.7
2021q2 | Wave 1 | 1,325 | 14.9 | 7.8 | 0.6
2021q2 | Waves 2 & 3 | 2,534 | 11.1 | 7.0 | 0.7
2021q2 | Wave 4 | 1,337 | 9.6 | 5.2 | 0.6
2021q3 | Wave 1 | 1,352 | 19.3 | 11.7 | 1.0
2021q3 | Waves 2 & 3 | 2,488 | 12.7 | 7.4 | 0.8
2021q3 | Wave 4 | 1,281 | 10.8 | 7.2 | 0.8
2021q4 | Wave 1 | 1,229 | 25.1 | 9.3 | 0.8
2021q4 | Waves 2 & 3 | 2,450 | 17.3 | 7.6 | 0.4
2021q4 | Wave 4 | 1,223 | 15.3 | 6.1 | 0.7
2022q1 | Wave 1 | 1,347 | 26.9 | 9.8 | 0.5
2022q1 | Waves 2 & 3 | 2,551 | 18.8 | 8.2 | 0.4
2022q1 | Wave 4 | 1,289 | 19.1 | 7.1 | 0.5
2022q2 | Wave 1 | 1,325 | 31.2 | 10.5 | 1.1
2022q2 | Waves 2 & 3 | 2,532 | 22.0 | 8.7 | 0.8
2022q2 | Wave 4 | 1,320 | 20.5 | 8.6 | 0.5
2022q3 | Wave 1 | 1,277 | 34.3 | 7.0 | 2.1
2022q3 | Waves 2 & 3 | 2,153 | 24.1 | 6.9 | 1.1
2022q3 | Wave 4 | 1,150 | 22.8 | 6.3 | 1.0
2022q4 | Wave 1 | 1,234 | 32.3 | 8.5 | 2.4
2022q4 | Waves 2 & 3 | 2,258 | 25.4 | 8.3 | 1.7
2022q4 | Wave 4 | 1,125 | 23.7 | 6.7 | 1.0
2023q1 | Wave 1 | 1,288 | 35.9 | 8.9 | 1.6
2023q1 | Waves 2 & 3 | 2,400 | 28.5 | 7.7 | 0.8
2023q1 | Wave 4 | 1,119 | 26.5 | 8.2 | 0.7
2023q2 | Wave 1 | 1,263 | 35.2 | 8.7 | 1.3
2023q2 | Waves 2 & 3 | 2,369 | 27.9 | 8.4 | 0.5
2023q2 | Wave 4 | 1,119 | 26.7 | 9.3 | 0.6
2023q3 | Wave 1 | 1,246 | 36.2 | 8.3 | 1.4
2023q3 | Waves 2 & 3 | 2,314 | 28.3 | 8.6 | 0.6
2023q3 | Wave 4 | 1,210 | 26.5 | 8.3 | 0.5

4. Expenditure edit rates (Diary and Interview Surveys)

The Expenditure edit rates metric measures the proportion of reported expenditure data that are edited. These edits are changes made to the reported expenditure data during CE data processing, excluding changes due to time period conversion calculations and top-coding or suppression of reported values. Top-coding and suppression are done to protect respondent confidentiality in the public use microdata (PUMD). Additional information on top-coding and suppression is available on the CE Website.

The Interview Survey expenditure edit rates are broken down into three categories: Imputation, Allocation, and Manual Edits:

  • Imputation replaces missing or invalid responses with a valid value.

  • Allocation edits are applied when respondents provide insufficient detail to meet tabulation requirements. For example, if a respondent provides a non-itemized total expenditure report for the category of fuels and utilities, that total amount will be allocated to the target items mentioned by the respondent (such as natural gas and electricity).

  • Manual edits occur whenever responses are directly edited by BLS economists based on their analysis and expert judgment.

The Diary Survey expenditure edit rates are broken down into only two categories: Allocations and Other Edits. Most edits in the Diary Survey are allocations. Table 4.1 below shows the "other edits" category, which covers all other expenditure edits, including imputation and manual edits. According to the data in Table 4.1, these edits are relatively rare.

Beginning in 2022, the BLS changed the way expenditure edit rates are calculated for the Diary Survey data. Changes to the alcohol cost flag are now considered an expenditure edit under the "other edits" category. This change was retroactively applied to the full metric series and has led to comparatively higher estimates for "Other Edits" and lower estimates for "Unedited" compared to previous reports.

Imputation of CE data results from item nonresponse. Allocation is a consequence of responses lacking the required details for items asked by the survey. Lower edit rates are preferred, as they lower the risk of processing error. However, edits based on sound methodology can improve the completeness of the data, and thereby reduce the risk of measurement error and nonresponse bias in survey estimates. Additional information on expenditure edits is available in the DQP Reference Guide (Armstrong, Jones, Miller, and Pham, 2023).
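
The sketch below illustrates how the Interview Survey edit-rate categories reported in Table 4.2 could be tallied from per-record edit indicators. The flag names and the precedence among overlapping flags are assumptions for illustration, not the production edit-classification rules.

```python
def interview_edit_rates(records: list[dict]) -> dict[str, float]:
    """Classify each expenditure record into one Table 4.2 category and return percentages.

    Each record is assumed to carry boolean flags `imputed`, `allocated`,
    and `manual_edit` (hypothetical names).
    """
    buckets = {"Allocated": 0, "Imputed": 0, "Imputed & Allocated": 0,
               "Manual Edit": 0, "Unedited": 0}
    for rec in records:
        if rec["imputed"] and rec["allocated"]:
            buckets["Imputed & Allocated"] += 1
        elif rec["allocated"]:
            buckets["Allocated"] += 1
        elif rec["imputed"]:
            buckets["Imputed"] += 1
        elif rec["manual_edit"]:
            buckets["Manual Edit"] += 1
        else:
            buckets["Unedited"] += 1
    total = len(records)
    return {category: round(100 * count / total, 1) for category, count in buckets.items()}

# Toy example with four expenditure records
records = [
    {"imputed": False, "allocated": True,  "manual_edit": False},
    {"imputed": True,  "allocated": False, "manual_edit": False},
    {"imputed": False, "allocated": False, "manual_edit": False},
    {"imputed": False, "allocated": False, "manual_edit": True},
]
print(interview_edit_rates(records))
# {'Allocated': 25.0, 'Imputed': 25.0, 'Imputed & Allocated': 0.0, 'Manual Edit': 25.0, 'Unedited': 25.0}
```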

Diary Survey Summary

  • The total rate of unedited expenditures rose from 88.3 percent in 2020q3 to 91 percent in 2023q2 (Table 4.1).

  • Diary Allocation edit rates fell from 11.6 percent in 2020q3 to 7.3 percent in 2023q2, which is most likely attributable to the consolidation of UCCs in that period (Table 4.1).

  • The rate of Other Edits rose from 0.1 percent in 2020q3 to a series high of 2.2 percent in 2022q4, before settling at 1.8 percent in 2023q2 (Table 4.1).

Table 4.1 Diary Survey: Reported expenditure records
Quarter | Number of Expenditures | Allocated | Other edit | Unedited
2020q3 | 56,071 | 11.6 | 0.1 | 88.3
2020q4 | 69,959 | 10.7 | 0.2 | 89.2
2021q1 | 72,138 | 10.9 | 0.2 | 89.0
2021q2 | 80,646 | 11.1 | 0.3 | 88.5
2021q3 | 75,663 | 11.3 | 0.5 | 88.2
2021q4 | 71,144 | 10.1 | 1.0 | 88.9
2022q1 | 82,352 | 10.1 | 0.6 | 89.4
2022q2 | 79,454 | 10.5 | 0.4 | 89.1
2022q3 | 83,957 | 10.9 | 1.2 | 87.9
2022q4 | 74,215 | 10.3 | 2.2 | 87.5
2023q1 | 81,434 | 7.1 | 1.4 | 91.5
2023q2 | 79,016 | 7.3 | 1.8 | 91.0

Interview Survey Summary

  • The total rate of unedited expenditure amounts increased 3.6 percentage points, from 83.6 percent in 2020q4 to 87.2 percent in 2023q3 (Table 4.2).

  • This was primarily driven by allocation rates declining 3 percentage points, from 11.6 percent in 2020q4 to 8.6 percent in 2023q3 (Table 4.2).

  • Manual edit rates varied little between 2020q4 and 2023q2, increasing just 0.2 percentage points from 0.3 percent to 0.5 percent, before decreasing to 0.2 percent in 2023q3 (Table 4.2).

Table 4.2 Interview Survey: Reported expenditure records
Quarter | Number of Expenditures | Allocated | Imputed | Imputed & Allocated | Manual Edit | Unedited
2020q4 | 232,195 | 11.6 | 4.3 | 0.2 | 0.3 | 83.6
2021q1 | 231,850 | 11.2 | 3.9 | 0.2 | 0.6 | 84.0
2021q2 | 232,282 | 10.1 | 4.5 | 0.2 | 0.2 | 85.0
2021q3 | 231,351 | 10.1 | 4.0 | 0.2 | 0.5 | 85.2
2021q4 | 222,027 | 9.8 | 3.7 | 0.2 | 0.6 | 85.7
2022q1 | 231,495 | 9.4 | 3.6 | 0.2 | 0.5 | 86.4
2022q2 | 229,608 | 9.3 | 3.8 | 0.2 | 0.5 | 86.3
2022q3 | 215,674 | 9.2 | 3.7 | 0.1 | 0.5 | 86.5
2022q4 | 213,369 | 9.1 | 3.7 | 0.2 | 0.4 | 86.6
2023q1 | 226,199 | 8.6 | 3.5 | 0.1 | 0.4 | 87.3
2023q2 | 211,813 | 8.7 | 3.6 | 0.1 | 0.5 | 87.2
2023q3 | 215,398 | 8.6 | 3.8 | 0.2 | 0.2 | 87.2

5. Income imputation rates (Diary and Interview Surveys)

The Income imputation rates metric describes edits performed on a CU's nonresponse to at least one source of income. This edit is based on three imputation methods, applicable to both CE Surveys:

  1. Model-based imputation: when the respondent mentions receipt of an income source but fails to report the amount.
  2. Bracket response imputation: when the respondent mentions receipt of an income source, but only reports that income as falling within a specified range.
  3. All valid blank (AVB) conversion: when the respondent reports no receipt of income from any source, but the CE imputes receipt from at least one source.

After imputation, income from each component source is summed to compute total income before taxes for the CU as a whole. In the following text, income before taxes is defined as “unimputed income” if no source of total income required imputation for one of the three reasons identified above. As stated, this applies to both the Diary and Interview Surveys.

The need for imputation arises either from item nonresponse or from responses that provide insufficient detail (e.g., reporting a range of income such as "between $40,000 and $50,000"). Higher response rates for actual values are associated with less measurement error in the data for imputation to address. However, imputation based on sound methodology produces a complete dataset and reduces the risk of nonresponse bias, since incomplete cases are no longer dropped from the dataset. Further details on the income imputation methodology can be found in the DQP Reference Guide (Armstrong, Jones, Miller, and Pham, 2023) and the User's Guide to Income Imputation in the CE (Paulin, Reyes-Morales, and Fisher, 2018).
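
The classification behind Tables 5.1 and 5.2 can be sketched as follows: total income before taxes is counted as unedited only when no component source needed any of the three treatments above. The per-source flag names, and the precedence used when several treatments apply, are illustrative assumptions.

```python
def income_imputation_status(sources: list[dict]) -> str:
    """Classify a CU's total income before taxes for Tables 5.1 and 5.2.

    Each element of `sources` is assumed to carry boolean flags `avb`,
    `bracket`, and `model` (hypothetical names) describing how that
    income source was treated.
    """
    avb = any(s["avb"] for s in sources)
    bracket = any(s["bracket"] for s in sources)
    model = any(s["model"] for s in sources)
    if not (avb or bracket or model):
        return "Unedited"
    if avb:
        return "Valid blanks converted (AVB)"
    if bracket and model:
        return "Model & bracket imputation"
    return "Bracket imputation" if bracket else "Model imputation"

# Example CU: wages reported exactly, interest income reported only as a bracket
cu_sources = [
    {"avb": False, "bracket": False, "model": False},  # wages and salaries
    {"avb": False, "bracket": True,  "model": False},  # interest income
]
print(income_imputation_status(cu_sources))  # Bracket imputation
```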

Diary Survey Summary

  • The rate of unimputed total income before taxes increased 4 percentage points, from 53.1 percent in 2020q3 to 57.1 percent in 2023q2 (Table 5.1).

  • Increases in unedited rates coincided with decreases in bracket imputation, which was 18.1 percent in 2020q3, peaked at 19.3 percent in 2021q3, and fell to a low of 16.8 percent in 2023q1 before rebounding slightly to 17.0 percent in 2023q2. Model-based imputation fluctuated between 18.1 percent in 2022q4 and 22.4 percent in 2021q4, after starting at 19.5 percent and ending the time series at 18.7 percent (Table 5.1).

  • AVB imputation rates have decreased to 2.1 percent in 2023q2 after reaching a series high of 3.9 percent in 2022q4 (Table 5.1).

  • While still down from 6.7 percent in 2020q3, the combined model and bracket imputation rate has increased slightly to 5.2 percent from its series low of 3.7 percent in 2022q4 (Table 5.1).

Table 5.1 Diary Survey: Income imputation rates for total amount of family income before taxes
Quarter | Number of respondents | Valid blanks converted (AVB) | Bracket imputation | Model imputation | Model & bracket imputation | Unedited
2020q3 | 2,559 | 2.6 | 18.1 | 19.5 | 6.7 | 53.1
2020q4 | 2,835 | 1.9 | 18.9 | 19.9 | 6.0 | 53.3
2021q1 | 2,952 | 2.0 | 18.7 | 18.4 | 5.6 | 55.2
2021q2 | 3,224 | 2.1 | 17.5 | 19.9 | 5.6 | 54.9
2021q3 | 3,027 | 2.5 | 19.3 | 18.4 | 5.3 | 54.5
2021q4 | 2,864 | 2.4 | 17.8 | 22.4 | 4.6 | 52.8
2022q1 | 3,357 | 2.3 | 19.0 | 19.5 | 4.5 | 54.7
2022q2 | 3,239 | 2.3 | 18.7 | 18.9 | 4.4 | 55.8
2022q3 | 3,248 | 1.8 | 17.6 | 19.4 | 6.1 | 55.1
2022q4 | 2,865 | 3.9 | 17.6 | 18.1 | 3.7 | 56.6
2023q1 | 3,162 | 2.4 | 16.8 | 18.8 | 4.6 | 57.4
2023q2 | 3,095 | 2.1 | 17.0 | 18.7 | 5.2 | 57.1

Interview Survey Summary

  • The rate of unimputed total income before taxes increased from 54.7 percent to 59.0 percent from 2020q4 to 2023q3, peaking in 2023q1 at 59.5 percent before declining half a percentage point (Table 5.2).

  • Model-based imputation rates hit a series low of 16.4 percent in 2023q1 before increasing to 17.5 percent in 2023q3 (Table 5.2).

  • The rate of model and bracket imputations was 4.7 percent in 2023q3, which is a decrease of 0.8 percentage points from 2020q4 (Table 5.2).

Table 5.2 Interview Survey: Income imputation rates for total amount of family income before taxes
Quarter | Number of respondents | Valid blanks converted (AVB) | Bracket imputation | Model imputation | Model & bracket imputation | Unedited
2020q4 | 5,205 | 1.3 | 18.2 | 20.3 | 5.5 | 54.7
2021q1 | 5,115 | 1.4 | 17.8 | 19.9 | 5.5 | 55.5
2021q2 | 5,196 | 1.3 | 17.4 | 20.5 | 5.8 | 55.0
2021q3 | 5,121 | 1.2 | 18.1 | 19.7 | 5.4 | 55.5
2021q4 | 4,902 | 1.4 | 17.1 | 18.6 | 5.3 | 57.5
2022q1 | 5,187 | 1.3 | 17.8 | 17.9 | 5.2 | 57.8
2022q2 | 5,177 | 1.4 | 17.0 | 18.3 | 5.4 | 58.0
2022q3 | 4,580 | 1.1 | 17.9 | 17.4 | 5.3 | 58.3
2022q4 | 4,617 | 1.0 | 18.3 | 17.7 | 4.9 | 58.1
2023q1 | 4,807 | 1.1 | 18.5 | 16.4 | 4.4 | 59.5
2023q2 | 4,751 | 1.2 | 17.6 | 17.2 | 4.7 | 59.3
2023q3 | 4,770 | 1.1 | 17.7 | 17.5 | 4.7 | 59.0

6. Respondent burden (Interview Survey)

Respondent burden in the Interview Survey relates to the perceived level of effort exerted by respondents in answering the survey questions. Survey designers are concerned about respondent burden because it has the potential to negatively impact response rates and overall response quality. Beginning in April 2017, the Interview Survey introduced a respondent burden question, with response options describing five different levels of burden, at the end of the Wave 4 interview. The respondent burden metric is derived from this question and maps the five burden categories to three metric values: not burdensome, some burden, and very burdensome. Please see the DQP Reference Guide (Armstrong, Jones, Miller, and Pham, 2023) for more details on the question wording and the burden categories.

A caveat to the interpretation of this metric is that since the burden question is only asked at the end of Wave 4, the metric may underestimate survey burden due to self-selection bias. That is, respondents who have agreed to participate in the final wave of the survey presumably find the survey less burdensome than sample units who had dropped out at any point prior to completing the final survey wave.

However, it is also possible that the respondent answering this question did not participate in prior interview waves. For example, the respondent who participated in the first three survey waves might move out of the sampled address prior to the final interview. This is not a common occurrence, but if someone else moves into the sampled address in time for the final wave, then they would be asked these questions.
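
As a simple illustration of the recoding described above, the sketch below collapses five burden response levels into the three reported categories. The five level labels and their grouping are assumptions; the actual question wording and mapping are documented in the Reference Guide.

```python
# Hypothetical five-level responses collapsed into the three reported categories.
BURDEN_RECODE = {
    "not at all burdensome": "not burdensome",
    "a little burdensome": "some burden",
    "somewhat burdensome": "some burden",
    "very burdensome": "very burdensome",
    "extremely burdensome": "very burdensome",
}

def burden_category(response: str | None) -> str:
    """Map a Wave 4 burden response to the three-category metric."""
    if response is None:
        return "missing response"
    return BURDEN_RECODE.get(response.strip().lower(), "missing response")

print(burden_category("Somewhat burdensome"))  # some burden
print(burden_category(None))                   # missing response
```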

Interview Survey Summary

  • After reaching a series low of 24.2 percent in 2021q4, the percentage of respondents who reported perceiving no burden increased to 29.3 percent in 2023q3 (Table 6.1).

  • The percentage of respondents who report perceiving some burden reached a series high of 57.9 percent in 2021q4 before declining to 54.0 percent in 2023q3 (Table 6.1).

  • The percentage of respondents who report that the survey was very burdensome decreased slightly from 14.9 percent in 2020q4 to 14.5 percent in 2023q3 (Table 6.1).

  • There has been little variation in burden responses in recent quarters, with the rate of missing responses at 2.1 percent in 2023q3 (Table 6.1).

Table 6.1 Interview Survey: Respondents’ perceived burden in the final survey wave
Quarter | Number of respondents | Not burdensome | Some burden | Very burdensome | Missing response
2020q4 | 1,386 | 29.7 | 53.5 | 14.9 | 1.9
2021q1 | 1,350 | 26.0 | 55.0 | 15.6 | 3.4
2021q2 | 1,337 | 29.0 | 55.8 | 12.3 | 2.9
2021q3 | 1,281 | 27.9 | 53.9 | 15.4 | 2.7
2021q4 | 1,223 | 24.2 | 57.9 | 15.3 | 2.6
2022q1 | 1,289 | 26.3 | 55.2 | 16.3 | 2.2
2022q2 | 1,320 | 28.4 | 54.7 | 14.6 | 2.3
2022q3 | 1,150 | 27.1 | 57.1 | 13.4 | 2.3
2022q4 | 1,125 | 28.3 | 54.8 | 15.0 | 1.9
2023q1 | 1,119 | 28.7 | 55.5 | 13.1 | 2.7
2023q2 | 1,119 | 28.2 | 55.9 | 13.6 | 2.4
2023q3 | 1,210 | 29.3 | 54.0 | 14.5 | 2.1

7. Survey mode (Diary and Interview Surveys)

These metrics measure the mode of data collection for the Diary and the Interview Surveys.

In the Diary Survey, the mode of data collection is two dimensional. The first dimension is how data about the household (e.g., household size, demographic characteristics, income and assets) were collected by the Census Field Representative (i.e., mostly in-person or mostly over the phone). The second is the diary type used by respondents when entering expenses during the diary-keeping period (i.e., online or paper). Until recently, the Diary Survey was administered strictly in paper form. As part of the redesign effort, the CE program introduced a new online diary mode.[11] This new mode prompted the inclusion of a quality metric that tracks the type of diary placed with the respondent at the time of placement. It should be noted that while the online diary became available in July 2020 as a supplemental data collection tool during the onset of the COVID-19 pandemic, it was not officially implemented into CE production until July 2022.

The Interview Survey was designed to be conducted in-person. However, the interviewer can also collect data over the phone, or by a combination of the two modes. Higher rates of in-person data collection are preferred since the interviewer can actively prompt the respondent, as well as encourage the use of recall aids, thereby reducing the risk of measurement error. Conducting first wave interviews in-person is especially important as this is typically the respondent’s first exposure to the survey. This serves as an opportunity for the Census FR to build rapport with the household. As a data quality measure, the BLS has inter-agency agreements with the Census Bureau stipulating that the average CE telephone interview rate, defined as interviews in which 50 percent or more of the survey sections are completed by telephone, should be limited to 25 percent for Wave 1 interviews, and 50 percent for all subsequent interview waves, or as feasible, to be determined by factors that may restrict or limit in-person interviewing in selected geographies. More information on how we calculate the mode metrics is available in the DQP Reference Guide (Armstrong, Jones, Miller, and Pham, 2023).
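
Under the definition quoted above, the following minimal sketch classifies an interview as a telephone interview when 50 percent or more of its sections were completed by phone, and checks a wave's telephone rate against the 25 percent (Wave 1) or 50 percent (later waves) limit. The input layout and field names are assumptions for illustration.

```python
def is_telephone_interview(phone_sections: int, total_sections: int) -> bool:
    """An interview counts as telephone if at least half its sections were done by phone."""
    return phone_sections / total_sections >= 0.5

def telephone_rate_within_limit(interviews: list[dict], wave: int) -> bool:
    """Check a wave's telephone interview rate against the inter-agency limit.

    `interviews` is assumed to be a list of dicts with keys `wave`,
    `phone_sections`, and `total_sections` (illustrative names).
    """
    in_wave = [iv for iv in interviews if iv["wave"] == wave]
    phone = sum(is_telephone_interview(iv["phone_sections"], iv["total_sections"])
                for iv in in_wave)
    rate = phone / len(in_wave)
    limit = 0.25 if wave == 1 else 0.50
    return rate <= limit

interviews = [
    {"wave": 1, "phone_sections": 1, "total_sections": 10},
    {"wave": 1, "phone_sections": 9, "total_sections": 10},
    {"wave": 1, "phone_sections": 2, "total_sections": 10},
    {"wave": 1, "phone_sections": 3, "total_sections": 10},
]
print(telephone_rate_within_limit(interviews, wave=1))  # True: 1 of 4 (25%) by telephone
```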

Diary Survey Mode Summary

  • The rate of in-person collection for diary household data varied little in the three most recent quarters, dropping from 69.9 percent in 2022q4 to 69.4 percent in 2023q1, before rising to 70.8 percent in 2023q2 (Table 7.1).

  • Over the past five quarters of available data, in-person collection has fluctuated between 69.4 and 70.8 percent (Table 7.1).

  • This recent period of low variance in in-person data collection rates followed a two-year cumulative increase of 68.7 percentage points between 2020q2 and 2022q2, which occurred after a drop to near zero in 2020q2 during the onset of the COVID-19 pandemic (Graph 7.1). Despite the growth of in-person collection, rates have not returned to pre-pandemic levels (Graph 7.1).

Table 7.1 Diary Survey: Interview Phase Mode
Quarter | Number of Diary Cases | In-Person | Telephone | Missing
2020q3 | 2,559 | 24.5 | 73.6 | 1.9
2020q4 | 2,835 | 43.8 | 53.1 | 3.1
2021q1 | 2,952 | 46.5 | 50.3 | 3.1
2021q2 | 3,224 | 59.6 | 36.7 | 3.7
2021q3 | 3,027 | 64.6 | 32.9 | 2.5
2021q4 | 2,864 | 60.8 | 32.8 | 6.4
2022q1 | 3,357 | 63.1 | 32.7 | 4.3
2022q2 | 3,239 | 69.6 | 25.0 | 5.4
2022q3 | 3,248 | 69.5 | 26.5 | 4.0
2022q4 | 2,865 | 69.9 | 26.0 | 4.1
2023q1 | 3,162 | 69.4 | 26.4 | 4.1
2023q2 | 3,095 | 70.8 | 25.4 | 3.8

Expenditure Diary Survey Mode Summary

  • In the two most recent quarters of available data, the proportion of paper diaries varied little (Table 7.2).

  • The proportion of online diaries experienced little variation, rising from 25.4 percent in 2022q4 to 27.8 percent in 2023q1, before dropping to 26.3 percent in 2023q2 (Table 7.2).

Table 7.2 Diary Survey: Expenditure diary survey mode
Quarter | Number of Diary Cases | Paper | Online | Missing
2020q3 | 2,559 | 66.3 | 33.1 | 0.6
2020q4 | 2,835 | 71.3 | 26.8 | 1.9
2021q1 | 2,952 | 71.2 | 27.1 | 1.6
2021q2 | 3,224 | 70.8 | 27.1 | 2.1
2021q3 | 3,027 | 70.5 | 27.9 | 1.6
2021q4 | 2,864 | 69.6 | 26.0 | 4.3
2022q1 | 3,357 | 69.1 | 27.8 | 3.2
2022q2 | 3,239 | 68.4 | 28.7 | 2.9
2022q3 | 3,248 | 71.4 | 25.5 | 3.1
2022q4 | 2,865 | 71.4 | 25.4 | 3.2
2023q1 | 3,162 | 69.9 | 27.8 | 2.3
2023q2 | 3,095 | 71.1 | 26.3 | 2.6

Interview Survey Summary

  • The rate of in-person interviews, across all waves, rose in the two most recent quarters from 37.5 percent in 2023q1 to 39.7 percent in 2023q2, and again to 40.4 percent in 2023q3 (Table 7.3).

  • This continues the trend of short run fluctuations in the proportion of in-person interviews, following the drop to near zero that occurred during the onset of the COVID-19 pandemic in early 2020 (Graph 7.3).

  • While rates of in-person interviews have improved in all waves, it is evident that Wave 1 experienced the greatest growth since the drop to near zero during the onset of the COVID-19 pandemic in early 2020 (Graph 7.4).

  • After reaching the aforementioned low in early 2020, the rate of Wave 1 in-person interviews increased to 50.3 percent in 2022q2 and has since hovered between 46.8 and 52.5 percent (Graph 7.4).

  • In the two most recent quarters, the rate of Wave 1 in-person interviews varied slightly, rising to 52.5 percent in 2023q2 from 49.1 percent in 2023q1, before dropping to 49.8 percent in 2023q3 (Table 7.4).

  • Rates of in-person interviews in Waves 2 and 3 experienced increases in the two most recent quarters of data, rising from 34.3 percent in 2023q1 to 35.8 percent in 2023q2, and again to 38.7 percent in 2023q3 (Table 7.4).

  • Wave 4 in-person interview rates also saw growth in the two most recent quarters, increasing from 31.0 percent in 2023q1 to 33.4 percent in 2023q2 before rising again to 34.1 percent in 2023q3 (Table 7.4).

Table 7.3 Interview Survey: Survey mode
Quarter | Number of respondents | In-person | Telephone | Missing
2020q4 | 5,205 | 19.5 | 80.3 | 0.2
2021q1 | 5,115 | 18.1 | 81.6 | 0.3
2021q2 | 5,196 | 26.3 | 73.4 | 0.3
2021q3 | 5,121 | 31.8 | 67.8 | 0.4
2021q4 | 4,902 | 30.7 | 69.0 | 0.3
2022q1 | 5,187 | 31.0 | 68.8 | 0.2
2022q2 | 5,177 | 36.4 | 63.0 | 0.5
2022q3 | 4,580 | 35.9 | 62.9 | 1.1
2022q4 | 4,617 | 35.3 | 63.3 | 1.5
2023q1 | 4,807 | 37.5 | 62.0 | 0.6
2023q2 | 4,751 | 39.7 | 59.8 | 0.5
2023q3 | 4,770 | 40.4 | 59.1 | 0.5

Table 7.4 Interview Survey: In-Person Interviews
Quarter | Number of respondents | Wave 1 | Waves 2 & 3 | Wave 4
2020q4 | 5,205 | 28.9 | 17.6 | 14.6
2021q1 | 5,115 | 28.7 | 15.9 | 12.2
2021q2 | 5,196 | 36.7 | 24.0 | 20.5
2021q3 | 5,121 | 46.1 | 28.1 | 24.0
2021q4 | 4,902 | 42.6 | 27.6 | 25.0
2022q1 | 5,187 | 42.1 | 28.5 | 24.2
2022q2 | 5,177 | 50.3 | 32.1 | 30.8
2022q3 | 4,580 | 49.3 | 32.1 | 28.3
2022q4 | 4,617 | 46.8 | 32.5 | 28.2
2023q1 | 4,807 | 49.1 | 34.3 | 31.0
2023q2 | 4,751 | 52.5 | 35.8 | 33.4
2023q3 | 4,770 | 49.8 | 38.7 | 34.1

8. Survey Response Time (Diary and Interview Surveys)

In both the Interview and Diary Surveys, survey response time is defined as the number of minutes needed to complete an interview. For the Diary Survey, the survey response time metric is the median number of minutes to complete the personal interview component that collects household information on income and demographics. For the Interview Survey, the survey response time metric is the median number of minutes to complete the interview. In the Interview Survey, Wave 1 and 4 interviews are typically longer because they collect additional information, mainly household demographics (Wave 1) and assets and liabilities (Wave 4). Survey response time in CE is considered an indicator for objective respondent burden. Presumably, the longer the time needed to complete the survey, the more burdensome the survey. Past internal CE research has found that higher respondent burden negatively affects both response rates and data quality. However, survey response time could also reflect the respondent’s degree of engagement. Engaged and conscientious respondents might take longer to complete the survey because they report more thoroughly or use records more extensively. Regardless, tracking the median survey response time can be useful for assessing the effect of changes in the survey design.
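
A short sketch of the median computation behind Tables 8.1 and 8.2 is shown below, grouping interview durations by wave; the (wave, minutes) input format is an assumption.

```python
from statistics import median

def median_minutes_by_wave(durations: list[tuple[int, float]]) -> dict[int, float]:
    """Median interview length in minutes, keyed by survey wave."""
    by_wave: dict[int, list[float]] = {}
    for wave, minutes in durations:
        by_wave.setdefault(wave, []).append(minutes)
    return {wave: median(times) for wave, times in sorted(by_wave.items())}

sample = [(1, 75.0), (1, 84.0), (1, 90.0), (4, 61.0), (4, 68.0)]
print(median_minutes_by_wave(sample))  # {1: 84.0, 4: 64.5}
```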

Diary Survey Summary

  • Median Diary Survey response time was 0.2 minutes shorter in the final quarter studied than in the first, changing from 34.9 minutes in 2020q3 to 34.7 minutes in 2023q2, though this metric experienced some variation throughout the period (Table 8.1).

  • Median response time fluctuated between 32.4 and 35.1 minutes from 2020q3 to 2022q2, before jumping to 38.0 minutes in 2022q3, and then falling back to 34.4 in 2022q4 (Graph 8.1).

  • In the two most recent quarters of available data, median response time varied little, rising from 34.4 minutes in 2022q4 to 35.3 minutes in 2023q1, and then dropping to 34.7 minutes in 2023q2 (Table 8.1).

Table 8.1 Diary Survey: Median length of time to complete the interview components (income and demographics)
Quarter | Number of Diary Cases | Minutes
2020q3 | 2,559 | 34.9
2020q4 | 2,835 | 32.7
2021q1 | 2,952 | 32.7
2021q2 | 3,224 | 32.9
2021q3 | 3,027 | 32.4
2021q4 | 2,864 | 34.9
2022q1 | 3,357 | 34.4
2022q2 | 3,239 | 35.1
2022q3 | 3,248 | 38.0
2022q4 | 2,865 | 34.4
2023q1 | 3,162 | 35.3
2023q2 | 3,093 | 34.7

Interview Survey Summary

  • Median time for Wave 1 interviews fluctuated over the past three years between a low of 74.4 and a high of 88.5 minutes (Table 8.2).

  • In the last three years, median time to complete Waves 2 and 3 interviews ranged between 54.6 and 62.5 minutes (Table 8.2).

  • Median time for Wave 4 interviews ranged between 58.8 and 69.5 minutes over the past three years (Table 8.2).

  • In 2022q3, median interview times rose above the normal range for all waves, following the implementation of Computer Assisted Recorded Interviewing (CARI). This was expected, as a similar jump in median time occurred during the pretest of the CARI consent question for 4th wave interview participants in 2021q4 (Table 8.2).

  • Wave 1 and Wave 4 median interview times have decreased in the three quarters following the initial implementation of CARI, but rose again in the most recent quarter to 84.8 and 67.1 minutes, respectively.

  • Median Wave 2 and 3 interview times continued to increase in the two quarters following the implementation of CARI, but fell in each of the two most recent quarters of available data (Table 8.2).

Table 8.2 Interview Survey: Median length of time to complete survey
Quarter | Number of respondents | Wave 1 | Waves 2 & 3 | Wave 4
2020q4 | 5,205 | 75.0 | 56.2 | 60.4
2021q1 | 5,115 | 74.4 | 54.6 | 61.7
2021q2 | 5,196 | 76.7 | 54.6 | 58.8
2021q3 | 5,121 | 78.0 | 54.6 | 60.0
2021q4 | 4,902 | 80.2 | 57.8 | 69.5
2022q1 | 5,187 | 79.6 | 57.7 | 62.8
2022q2 | 5,177 | 79.2 | 57.7 | 63.1
2022q3 | 4,580 | 88.5 | 61.8 | 69.2
2022q4 | 4,617 | 84.1 | 62.0 | 68.2
2023q1 | 4,807 | 83.9 | 62.5 | 67.7
2023q2 | 4,751 | 81.0 | 60.0 | 65.6
2023q3 | 4,770 | 84.8 | 59.7 | 67.1

Summary

BLS is committed to producing data that are consistently of high statistical quality. As part of that commitment, BLS publishes the DQP and its accompanying Reference Guide (Armstrong, Jones, Miller, and Pham, 2023) to assist data users as they evaluate CE data quality metrics and judge whether CE data fit their needs. DQP metrics therefore cover both the Interview and Diary Surveys, multiple dimensions of data quality, and several stages of the survey lifecycle. Additionally, BLS uses these metrics internally to identify areas for potential survey improvement, evaluate the effects of survey changes, and to monitor the health of the surveys.

Diary Survey response rates recovered somewhat from the historical low of 26.1 percent in early 2020, but in recent quarters have continued the overall downward trend that was evident prior to the onset of the COVID-19 pandemic. Response rates in the Interview Survey experienced a noticeable drop in recent quarters, which was partially attributed to cost-saving measures employed by Census during that time period, but have also continued a general trend of decline. While the external shock of the pandemic did not have as negative of an impact on Interview Survey response rates as it did on the Diary in early 2020, the overall decline from the pre-pandemic period to the present has been greater for the Interview Survey.

Despite the downward trend in survey response rates, quality metric trends relating to the administration and processing of the CE Surveys have yielded more encouraging results over the past few years of available data. In particular, the rates of records use in the Interview Survey, CE Information Booklet use in both CE Surveys, and in-person Wave 1 interviews have all trended upward over the past several years.

With respect to survey processing, the percentages of allocations in the Interview Survey and the Diary Survey expenditure edits have continued to trend downward. The rate of income imputations experienced little variation for both CE Surveys over the period covered, although both have experienced a slight upward trend in the percentage of unedited cases.

Regarding the respondent experience with the survey, median Interview Survey times have continued an overall upward trend in recent years. Theoretically, this should lead to an increase in survey burden, but respondents’ self-reported levels of burden have seen little variation.

BLS will continue to monitor these trends, and the next issue of the CE Data Quality Profile will be released in September of 2024 alongside the BLS’s annual release of 2023 CE data. That report will feature CE Diary Survey data through 2023q4 and CE Interview Survey data through 2024q1.

References

Abdirizak, S., L. Erhard, Y. Lee, and B. McBride (2017). Enhancing Data Quality Using Expenditure Records. Paper Presented at the Annual Conference of the American Association for Public Opinion Research, New Orleans, LA.

Armstrong, G., G. Jones, T. Miller, and S. Pham (2023). CE Data Quality Profile Reference Guide. Program Report Series, the Consumer Expenditure Surveys. Bureau of Labor Statistics.

Ash, S., B. Nix, and B. Steinberg (2022). Report on Nonresponse Bias during the COVID-19 Period for the Consumer Expenditures Interview Survey. Published as part of the Consumer Expenditure Surveys Program Report Series. Bureau of Labor Statistics.

Elkin, I., B. McBride, and B. Steinberg (2018). Results from the Incentives Field Test for the Consumer Expenditure Survey Interview Survey. Published as part of the Consumer Expenditure Surveys Program Report Series. Bureau of Labor Statistics.

Fricker, S., J. Gonzalez, and L. Tan. (2011). Are you burdened? Let's find out. Paper Presented at the Annual Conference of the American Association for Public Opinion Research, Phoenix, AZ.

Krishnamurty, P., G. Jones, and B. McBride (2021). Large Scale Feasibility Test Final Report. Published as part of the Consumer Expenditure Surveys Program Report Series. Bureau of Labor Statistics.

Paulin, G., S. Reyes-Morales, and J. Fisher (2018). User's Guide to Income Imputation in the CE. U.S. Bureau of Labor Statistics.

Wilson, T. J. (2017). The Impact of Record Use in the CE Interview Survey. CE Survey Methods Symposium. Bureau of Labor Statistics.

Footnotes


[1] The Office of Management and Budget has oversight over all Federal surveys and provides the rules under which they operate.  See the Federal Register notice for more details.

[2] Standard errors are also available in the CE LABSTAT database, as of 2022.
[3] Instructions on using the CE PUMD to create variables and flags for quality analysis can be found in the CE PUMD Getting Started Guide.
[4] More information may be found on the CE Frequently Asked Questions (FAQ) page.
[5] Universal Classification Codes (UCCs) are used by the BLS for the specific classification of individual expenditures reported by respondents.
[6] The most recent data quality report was published in September 2023 and is also available on the BLS public website.
[7] In the 2022 DQP Annual report, final disposition rates were compared to 2021q1. Upon further examination, the analysis team concluded that comparing to 2020q4 would have been best as it is an earlier quarter, and all disposition rate categories are comparable across 2020q4 and 2021q1.
[8] In the Interview Survey, each family in the sample is interviewed every 3 months over four calendar quarters. These interviews are commonly referred to as waves. For more information on survey administration, please see the CE handbook of methods.
[9] In Graph 2.1, records use temporarily rose in 2016 for Wave 1 respondents. This is likely a result of a field test conducted in that year that gave a subset of respondents monetary incentives (Elkin, McBride, and Steinberg, 2018) to use records.
[10] This “Did not use” category does not include records where there was no Information Booklet available.
[11] The Gemini Project was launched to research and develop a redesign of the Consumer Expenditure (CE) surveys, addressing issues of measurement error and respondent burden.

Last Modified Date: May 3, 2024