The U.S. Bureau of Labor Statistics has developed data on how U.S. businesses changed their operations and employment from the onset of the coronavirus pandemic through September 2020. Data are for the 50 states, the District of Columbia, and Puerto Rico. These tabulations, in combination with data collected by current BLS surveys, aid in understanding how businesses responded during the pandemic.

You can send comments or questions on these data to the Business Response Survey (BRS) staff by email.

On This Page

- Methodology
- Definitions
- Sample Design and Selection Procedures
- Questionnaire
- Response Rate
- Data Editing
- Estimation Procedure
- Details about Specific Tabulations
- Precision of Estimates
- Reliability
- Non-Response Adjustment

These data were collected from July 20 through September 30, 2020. The BRS relied on the existing data collection instrument of the BLS QCEW program’s Annual Refiling Survey (ARS). BRS survey responses were solicited via email and printed letters and collected online using the platform regularly used by the ARS. This allowed a large, nationally representative sample to be surveyed at minimal cost to BLS.

*Establishments.* An individual establishment is generally defined as a single physical location at which one, or predominantly one, type of economic activity is conducted. Most employers covered under the state UI laws operate only one place of business.

*North American Industry Classification System (NAICS) codes.* NAICS codes are the standard used by federal statistical agencies in classifying business establishments for the purpose of collecting, analyzing, and publishing statistical data. Industrial codes are assigned by state agencies to each establishment based on responses to questionnaires where employers indicate their principal product or activity. If an employer conducts different activities at various establishments, separate industrial codes are assigned, to the extent possible, to each establishment.

*Large/small.* For these data, establishments with 2019 annual average employment greater than 499 are considered large.

For the Business Response Survey, BLS selected a stratified sample of approximately 597,000 establishments. The sample was drawn from the establishments included in the BLS Business Register, built from the fourth quarter 2019 QCEW. There are currently 9 million in-scope establishments on the BLS Business Register. The universe of respondents to the QCEW comprises the 50 States, the District of Columbia, Puerto Rico, and the U.S. Virgin Islands. The primary source of data for these 53 entities is the Quarterly Contribution Reports (QCRs) submitted to State Workforce Agencies (SWAs) by employers subject to State Unemployment Insurance (UI) laws. The QCEW data, which are compiled for each calendar quarter, provide a comprehensive business name and address file with employment and wage information by industry, at the six-digit North American Industry Classification System (NAICS) level, and at the national, State, Metropolitan Statistical Area (MSA), and county levels for employers subject to State UI laws. Similar data for Federal Government employees covered by the Unemployment Compensation for Federal Employees (UCFE) program are also included. The sample excludes Private Households (NAICS 814110); Services for the Elderly and Disabled Persons (NAICS 624120 with employment < 2); U.S. Postal Service (NAICS 491110); and unclassified accounts (NAICS 999999).

At the time the survey was being designed, it was already clear that government mandates related to the virus would differ from state to state and that different industries and size classes would have different programs targeted toward them. Because of this, BLS sought to produce estimates for specific state–industry, state–size class, and industry–size class groups. As a result, the primary survey analysis breakouts initially requested by researchers are shown below, noting that all 50 states plus the District of Columbia and Puerto Rico were included:

- State by Special Interest NAICS Sector Categories
- (6 special interest NAICS categories: 23, 31-33, 44-45, 62Alt, 72, Others)

- State by Size Category
- (2 size categories: large, small)

- NAICS Sector Categories by Size Category
- (22 NAICS sector categories; 2 size categories (large, small))

- Industry Size Class
- (9 classes: 1-4, 5-9, 10-19, 20-49, 50-99, 100-249, 250-499, 500-999, 1000+)

Researcher interest was not, and is not, limited to only these analysis breakouts. However, because the breakouts above were the ones initially identified as most important, the sample was designed to achieve a desired precision when estimating specifically for these particular analysis breakouts. Conversely, the sample was *not* designed to achieve a desired precision when estimating for other analysis breakouts, although in some cases the desired precision may be achieved anyway.

Sample sufficiency was determined for each survey analysis breakout listed above. For each of these four analysis breakouts, sample sufficiency counts were determined based on estimating proportions to a certain degree of precision, where precision was based on researcher needs weighed versus survey burden and cost. The formula for the sample sufficiency of an estimation cell was based on the deconstruction of the formula for the variance of a proportion (using simple random sampling within the cell).
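As a rough illustration of that deconstruction, the required number of responders in a cell can be solved from the simple-random-sampling variance of a proportion. The sketch below is illustrative only: the margin of error, the 95% confidence level, and the worst-case p = 0.5 are assumptions for the example, not the actual BLS precision targets.

```python
import math

def sufficient_sample_size(margin_of_error, p=0.5, z=1.96):
    """Responders needed so a proportion estimate (simple random sampling
    within the cell, finite-population correction ignored) achieves the
    given margin of error at roughly 95% confidence."""
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

# Worst case (p = 0.5) for a +/- 5 percentage point margin of error:
print(sufficient_sample_size(0.05))  # 385
```

Using p = 0.5 maximizes p(1 − p), so the resulting count is sufficient regardless of the true proportion in the cell.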

The four sets of sufficiency counts were then meshed together to create a single unified stratified sample design. The unified design specified a targeted number of responders for each State by two-digit NAICS Sector Category (slightly modified in some cases) by Industry Size Class combination, where each of these combinations equated to a sample stratum. Sample sizes were then derived by dividing each stratum’s targeted number of responders by an estimated survey response rate. The overall sample size was the sum of these individual stratum sample sizes.

Once sample sizes were finalized for each state, NAICS sector category, and size class stratum, samples were selected within each stratum using simple random sampling without replacement.
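A minimal sketch of the per-stratum selection step, under the assumption that responder targets are inflated by an expected response rate before drawing a simple random sample without replacement. The frame, the target of 50 responders, and the 27% rate are hypothetical values for the example (the 27% merely echoes the realized BRS response rate).

```python
import math
import random

def stratum_sample(frame_units, target_responders, expected_response_rate):
    """Inflate the responder target by the expected response rate, then
    draw a simple random sample without replacement from the stratum frame."""
    n = min(len(frame_units),
            math.ceil(target_responders / expected_response_rate))
    return random.sample(frame_units, n)  # sampling without replacement

# Hypothetical stratum: 2,000 establishments, 50 responders wanted, ~27% response
frame = [f"estab_{i}" for i in range(2000)]
selected = stratum_sample(frame, target_responders=50, expected_response_rate=0.27)
print(len(selected))  # 186
```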

The BRS asked questions in three areas: 1) business experiences and payroll decisions, 2) worker benefits and employees’ ability to telework, and 3) whether a business received a government loan or grant tied to payroll.

For Questions 1, 2, 5, and 6, establishments could have experienced or made decisions about more than one of the situations presented. For example, in Question 1, an establishment could have experienced both a shortage in supplies and a government-mandated closure. For these questions, respondents were instructed to mark all of the situations that applied to them.

The total usable response rate for the BRS was 27.2 percent.

Each respondent was eligible for all 7 questions in the survey. A survey was considered completed if at least 4 of the 7 questions had a valid answer (blanks and “don’t know” responses were considered to be invalid answers). Estimates were generated only using completed surveys. Of the approximately 597,000 establishments that received the survey, 27.5 percent responded and 27.2 percent had a completed survey.
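The completion rule above can be expressed compactly. This is a sketch under the stated rule; the answer encodings (blank, None, the literal string "don't know") are assumptions for illustration, not the actual collection codes.

```python
def is_completed(answers):
    """A survey is complete if at least 4 of the 7 questions carry a valid
    answer; blanks and "don't know" responses count as invalid."""
    INVALID = {None, "", "don't know"}
    valid = sum(1 for a in answers if a not in INVALID)
    return valid >= 4

# Hypothetical response: Questions 1-4 answered; 5 blank, 6 "don't know", 7 blank
print(is_completed(["yes", "no", "yes", "no", "", "don't know", None]))  # True
```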

In the estimation for specific questions, blanks and “don’t know” responses were treated as a non-response to the question by the establishment and were not included in the estimation for the specific question. Non-response to a specific question was treated as described in the estimation methodology section.

The main survey measures of interest included:

(i) Proportion of establishments possessing an attribute

(ii) Number of establishments possessing an attribute

(iii) Proportion of employees working at establishments that possess an attribute

(iv) Number of employees working at establishments that possess an attribute

Each measure was estimated within each stratum, provided the stratum included at least one usable responder. Strata estimates were then combined to derive composite estimates for various analysis breakouts, e.g., national estimates, state estimates.

For estimation methodology purposes, the primary measure of interest was the estimated proportion of establishments possessing an attribute being assessed by a survey question, e.g. the proportion of establishments that experienced a shortage of supplies or inputs as a result of the coronavirus pandemic. The other estimates were then calculated as functions of these proportions.

Specifically, within-stratum establishment count estimates were calculated as the product of the stratum’s establishment proportion estimate and the stratum’s total establishment population. Similarly, within-stratum employment count estimates were calculated as the product of the stratum’s establishment proportion estimate and the stratum’s total employment.
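The two scaling steps just described can be sketched as a single function; the input values are hypothetical.

```python
def stratum_counts(p_hat, n_establishments, total_employment):
    """Within-stratum count estimates: the establishment proportion estimate
    scaled by the stratum's establishment and employment totals."""
    return p_hat * n_establishments, p_hat * total_employment

# Hypothetical stratum: 30% of establishments possess the attribute
estabs, emp = stratum_counts(p_hat=0.30, n_establishments=1200, total_employment=15000)
print(estabs, emp)  # 360.0 4500.0
```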

Within each stratum, for a particular survey question, establishment proportion estimates were calculated over the sample units that:

- Responded to at least 4 of the 7 survey questions
- Responded to that particular survey question with something other than a response of “don’t know”

When estimating stratum-level establishment and employment counts, sample unit weights were adjusted upward to account for both unit and item non-response. For these purposes, “don’t know” responses were treated as item non-response.

Final composite estimation was achieved in stages:

- Direct Strata Estimation (for strata with at least one usable responder)
- First Pass Composite Estimation (composite estimates over only strata with usable responders)
- Strata Imputation (for strata with no usable responders)
- Second Pass Composite Estimation (incorporated directly-estimated and imputed strata values)

Direct strata estimation was conducted for strata and survey questions for which there was at least one usable responder. From these strata-level results, first-pass composite estimates were produced for establishment proportions (and their variances) for various aggregations of strata, e.g., national, state.

Composite establishment proportion estimates were calculated as weighted sums of strata establishment proportion estimates. Composite estimation weights (i.e. strata weights) were calculated as each stratum’s establishment population proportion relative to the total number of establishments in the composite.

In the first pass through composite estimation, the weighted sum was taken over only those strata for which direct strata estimates could be calculated. Therefore, strata weights were adjusted to account for only those strata contributing to a particular first pass composite estimate.

First pass composite estimates for establishment proportions (and their variances) were then used to impute missing strata-level establishment proportions (and their variances). These imputed strata-level establishment proportions were then used to calculate strata-level estimates for establishment and employment counts.

Finally, composite estimation was re-run using direct strata estimates where possible and imputed strata estimates where necessary. Again, composite proportion estimates were calculated as weighted sums of strata establishment proportion estimates. However, in the second pass through composite estimation, all strata in each composite aggregate contained values (either directly calculated or imputed) and, therefore, strata weights no longer needed to be adjusted for missing strata.

Final (i.e. second pass) composite estimates of establishment and employment counts were calculated as unweighted sums of the relevant strata estimates.

Final composite estimates of employment proportions were calculated as weighted sums of strata establishment proportions, where strata weights were calculated as each stratum’s total employment proportion relative to the total employment in the composite.
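The composite weighting logic can be sketched as follows: a weighted sum of stratum proportions, with the weights renormalized over only the contributing strata (the first-pass adjustment for strata with no usable responders). The same function yields employment-proportion composites if employment totals are passed as the weights. All numbers are hypothetical.

```python
def composite_proportion(strata):
    """strata: (population weight, proportion estimate) pairs, with None
    marking a stratum with no usable responders. Weights are renormalized
    over the contributing strata, as in first-pass composite estimation."""
    usable = [(w, p) for w, p in strata if p is not None]
    total = sum(w for w, _ in usable)
    return sum((w / total) * p for w, p in usable)

# (establishment population, proportion estimate); one stratum is missing
strata = [(500, 0.20), (300, 0.40), (200, None)]
print(round(composite_proportion(strata), 4))  # 0.275
```

In the second pass, every stratum carries a value (direct or imputed), so no renormalization occurs and the weights are simply each stratum's share of the composite total.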

Estimates of Employment

The estimates of employment represent the total number of employees working at an establishment for which a particular situation occurred for at least one worker. It is not an estimate of the number of employees who experienced the situation. For example, the employment estimate for “offered telework to employees who could not telework prior to the Coronavirus pandemic” (Question 5) is an estimate of the number of employees who worked at an establishment where at least one worker was offered telework. It is not an estimate of the total number of workers who were offered telework.

Question 2 (Changes made to the payroll or employment)

One of the possible responses to this question was “told employees not to work”. For cross question consistency, the estimates for “told employees not to work” were based on the answers to Question 3. The proportion of establishments that told employees not to work was constructed as the number of establishments answering “Yes” or “No” to Question 3 divided by the number of establishments answering “Yes”, “No”, or “Not Applicable, No Employees Told Not to Work” to Question 3.

Question 3 (Whether paying some employees who were told not to work)

The published estimates of proportions, number of establishments, and employment are conditioned on the establishment telling at least some employees not to work. Establishments that did not have any employees who were told not to work were excluded from the calculations.

Question 4 (Whether paying health insurance premiums for employees told not to work)

Estimates of the proportion of establishments that paid a portion of health insurance premiums for employees told not to work were constructed using information from Question 3 and Question 4. Because it is not possible to estimate the number of establishments that “told employees not to work” from Question 4, this information was derived from Question 3. The proportion of establishments that paid a portion of health insurance premiums for at least some of their workers told not to work was constructed as the number of establishments that answered “Yes” to Question 4 divided by the number of establishments that answered “Yes” or “No” to Question 3.
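The cross-question ratio described above reduces to a simple calculation; the counts below are hypothetical.

```python
def q4_proportion(q4_yes, q3_yes, q3_no):
    """Share of establishments paying a portion of health insurance premiums
    for workers told not to work: "Yes" answers to Question 4 divided by
    establishments answering "Yes" or "No" to Question 3 (i.e. those that
    told at least some employees not to work)."""
    return q4_yes / (q3_yes + q3_no)

# Hypothetical counts: 45 "Yes" to Question 4; 60 "Yes" and 40 "No" to Question 3
print(q4_proportion(45, 60, 40))  # 0.45
```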

Question 5 (Offering telework to employees)

Estimates of proportions, number of establishments, and employment combine the two categories of "offered telework to employees who could not telework prior to the Coronavirus pandemic" and "increased number of telework hours for employees already permitted to telework".

Question 6 (Offering paid sick leave to employees)

Estimates of proportions, number of establishments, and employment combine the two categories of "provided paid sick leave to employees who did not have paid sick leave prior to the Coronavirus pandemic" and "increased amount of paid sick leave for employees who already had sick leave prior to the Coronavirus pandemic".

Sampling Error

The BRS estimates are statistical estimates subject to sampling error because they are based on a sample of establishments rather than the entire universe of establishments. Standard errors are provided for the construction of confidence intervals around an estimate and for hypothesis testing. The standard errors were derived using the variances generated according to the methodology outlined in the estimation section.

Rounding

Estimates of employment and the number of establishments are rounded to the nearest integer. Estimates of percentages are rounded to two decimal places.

Variance estimates were calculated for the following survey measures of interest:

(i) Proportion of establishments possessing an attribute

(ii) Number of establishments possessing an attribute

(iii) Proportion of employees working at establishments that possess an attribute

(iv) Number of employees working at establishments that possess an attribute

Each variance was estimated within each stratum, provided the stratum included at least one usable responder. Strata variance estimates were then combined to derive composite variance estimates for various analysis breakouts, e.g., national estimates, state estimates.

For variance estimation methodology purposes, the primary variance of interest was the estimated variance of the proportion of establishments possessing an attribute being assessed by a survey question.

Variance estimation for establishment proportions involved (1) the application of the basic formula for the variance of a proportion drawn from a simple random sample and (2) the application of the general formula for the variance of a composite proportion estimator drawn from a stratified random sample. More specifically, in regards to (2), the composite variance estimator used for establishment proportions was the sum of the product of each stratum’s relevant variance estimate and the square of its stratum weight, where the sum is taken over all strata in the composite.
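The composite variance estimator in (2) can be written in a few lines; the weights and stratum variances below are hypothetical.

```python
def composite_variance(strata):
    """Variance of the composite proportion estimator: the sum over strata of
    the squared stratum weight times the stratum's proportion variance."""
    return sum(w**2 * var for w, var in strata)

# (stratum weight, stratum proportion variance) for a three-stratum composite
strata = [(0.5, 0.004), (0.3, 0.010), (0.2, 0.025)]
print(round(composite_variance(strata), 4))  # 0.0029
```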

For any stratum with more than one establishment in the universe but only one item response for a particular survey question, the stratum variance was set to a default value. This was done to avoid setting these variances equal to zero, which could contribute to underestimating composite variance estimates. The default value was equivalent to the variance that would have been realized if the stratum had two responders, with one responding in the affirmative to the attribute being analyzed, and the other responding in the negative.
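Under the basic SRS proportion variance p(1 − p)/n with the finite-population correction ignored (an assumption here; the text does not spell out the exact BLS formula), the two-responder, one-yes/one-no default works out as follows.

```python
def default_stratum_variance():
    """Default variance for a stratum with a single item response: the value
    a two-responder split (one "yes", one "no") would yield under the basic
    SRS proportion variance p(1 - p)/n, fpc ignored (assumed formula)."""
    p, n = 0.5, 2
    return p * (1 - p) / n

print(default_stratum_variance())  # 0.125
```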

Stratum-level variance estimates for establishment and employment counts were calculated as functions of the corresponding stratum-level establishment proportion variance estimates. For example, because each stratum-level establishment count estimate was calculated as the product of the stratum-level establishment proportion estimate and the stratum’s total establishment population, the stratum-level establishment count variance estimate was set equal to the stratum-level establishment proportion variance estimate times the square of the stratum’s total establishment population. Stratum-level employment count variance estimates followed the same formulation, except strata employment counts were used instead of strata establishment counts.

Stratum-level variance estimates for employment proportions were set equal to the stratum-level variance estimates for establishment proportions, since employment proportions themselves were set equal to the directly-calculated establishment proportions. Composite variance estimates for employment proportions were calculated using the same formula as for composite variance estimates for establishment proportions, except using employment-based strata weights instead of establishment-based strata weights.

Strata with no usable respondents had their establishment proportion variance estimates imputed in the same way that their corresponding establishment proportion estimates were imputed (using first pass composite estimates).

First pass composite *variance* estimates were subject to the same strata weight adjustments as were first pass composite *proportion* estimates. Similarly, second/final pass composite variance estimates were calculated using unadjusted strata weights because, at that point, all strata had either direct stratum-level variance estimates or imputed stratum-level variance estimates (i.e. there were no missing variance estimates).

The sample design and estimation strategy was to select independent samples within survey strata and then to calculate composite estimates by aggregating across strata results. The sample design stratified on three variables – state, modified NAICS sector, and size class – yielding 52 × 22 × 9 = 10,296 possible survey strata. However, about 10% (1,066) of the possible survey strata contained no establishments. For each of the 9,230 non-empty strata, a sample of at least one establishment was drawn. Of the 9,230 non-empty strata, about 11% (1,032) yielded no usable survey responses, leaving 8,198 strata with at least one usable survey responder.

The numbers in the previous paragraph summarize the effect of unit non-response on strata usability. However, a survey respondent was considered usable if it yielded responses to at least 4 of the 7 survey questions. Therefore, item non-response created situations where a stratum had at least one usable response for one question but no usable responses for another question. For example, for Question 1, there were 95 strata that had at least one unit responder, but no usable item responses to Question 1.

As discussed in some detail in the Estimation Procedure section, to accommodate strata with no usable item responses, final composite estimation was achieved in stages:

- Direct Strata Estimation (for strata with at least one usable responder)
- First Pass Composite Estimation (composite estimates over only strata with usable responders)
- Strata Imputation (for strata with no usable responders)
- Second Pass Composite Estimation (incorporated directly-estimated and imputed strata values)

Specifically in regards to the approach to strata imputation itself, survey strata and question combinations that had no usable item responses had their establishment proportions and variances imputed according to the following ordered hierarchy of composite estimates:

(i) State, modified NAICS sector, size class large/small (500+,1-499)

(ii) Census division, modified NAICS sector, size class large/small

(iii) Census region, modified NAICS sector, size class large/small

(iv) Modified NAICS sector, size class large/small

(v) Size class (nine size classes)

For example, for Question 1, suppose a state had no usable responses for NAICS sector 11 and size class 1000+. Further, suppose that for the same question and state there were multiple responses for NAICS sector 11 and size class 500-999. In this case, there *would* be a viable composite estimate for the stratum’s corresponding state, sector, size class large/small composite cell. Therefore, the stratum’s establishment proportion and variance would get imputed from that first composite in the hierarchy.

As another example, for Question 2, suppose a state had no usable responses for NAICS sector 11 for either of size classes 500-999 and 1000+. Further, suppose that other states in the same Census division had multiple responses for NAICS sector 11 and both size classes 500-999 and 1000+. In this case, the first composite in the imputation hierarchy would prove inadequate, but the second composite down the priority list would yield a viable composite estimate and, therefore, would be used for imputation for the stratum.

It is worth noting that the lowest-prioritized composite in the imputation hierarchy – the size class composite – is the fail-safe, since composite estimates existed for all nine size classes for every question.
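The fall-through behavior of the hierarchy can be sketched as a first-match lookup. The level labels and estimate values below are hypothetical; only the ordering mirrors the text.

```python
def impute_from_hierarchy(candidates):
    """candidates: (level, composite estimate) pairs ordered from most to
    least specific, with None marking a level that has no viable composite.
    Returns the first viable level and its estimate."""
    for level, estimate in candidates:
        if estimate is not None:
            return level, estimate
    # The size-class composite exists for every question, so this is unreachable.
    raise ValueError("no viable composite found")

# Hypothetical stratum where the state-level composite is unavailable:
hierarchy = [
    ("state/sector/large-small", None),
    ("division/sector/large-small", 0.18),
    ("region/sector/large-small", 0.21),
    ("sector/large-small", 0.22),
    ("size class", 0.25),
]
print(impute_from_hierarchy(hierarchy))  # ('division/sector/large-small', 0.18)
```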