November 2017

Benchmarking the Current Employment Statistics state and area estimates

The Current Employment Statistics survey produces detailed estimates of employment, hours, and earnings for the nation, states, Puerto Rico, the U.S. Virgin Islands, and more than 450 metropolitan statistical areas. This article focuses on the current methods used to benchmark state and metropolitan area data and is one of four articles in a series on benchmarking.

The Current Employment Statistics (CES) survey, conducted by the U.S. Bureau of Labor Statistics (BLS), is a monthly survey of approximately 147,000 businesses and government agencies, representing 634,000 individual worksites. The CES program provides detailed estimates on employment, hours, and earnings for the nation, states, Puerto Rico, the U.S. Virgin Islands, and more than 450 metropolitan statistical areas (MSAs). The CES survey is widely considered one of the most timely and accurate economic indicators published by the federal government.

Annually, the CES program benchmarks, or reanchors, the sample-based employment estimates for March of each year to universe counts derived principally from the Quarterly Census of Employment and Wages (QCEW) program. The benchmark process accounts for survey and other errors that accumulate over the year. The QCEW counts are much less timely than the sample-based estimates but provide an annual point-in-time census of employment.

The current research agenda for the CES program includes examining alternative benchmarking approaches for national, state, and metropolitan area data. As background for that examination, this article discusses the current methods used to benchmark state and metropolitan area data and is a companion piece to three other articles on benchmarking in the CES program. In the first article, Christopher Manning and John Stewart discuss the current benchmark methods and results for national CES data. In the second, Kenneth Robertson presents the conceptual underpinnings of benchmarking and a high-level overview of recent research. In the third, more technical article, Mark Loewenstein and Matthew Dey discuss the alternative method that has shown the most promise.

Establishing benchmark levels and the postbenchmark period

With each annual benchmark, the standard practice for state and area series is to revise 20 months of not seasonally adjusted data before the normal monthly estimation processes begin on the new levels. For example, with the development of the March 2016 benchmark, levels were reestablished for the April 2015 through November 2016 reference months. December 2016 final and January 2017 preliminary estimates were developed based on those newly established levels.
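
To make the 20-month window concrete, here is a minimal Python sketch of the date arithmetic from the example above (variable names are illustrative, not part of any CES system):

```python
from datetime import date

# Example from the text: the March 2016 benchmark reestablishes levels for
# the April 2015 through November 2016 reference months, a 20-month span
# of not seasonally adjusted data.
first_revised = date(2015, 4, 1)   # April of the year before the benchmark
last_revised = date(2016, 11, 1)   # November after the March benchmark

months_revised = ((last_revised.year - first_revised.year) * 12
                  + (last_revised.month - first_revised.month) + 1)
print(months_revised)  # 20
```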

A snapshot of the QCEW is the starting point for building the CES March benchmark level. Added to that level is an accounting of employment that is covered under the CES definition but not by state unemployment insurance (UI) tax laws. This second segment of employment, called noncovered employment, is present only in select industries.1

The total of the QCEW and noncovered employment (referred to as the population) replaces March sample counts for all national, state, and metropolitan series. The benchmark method differs between the state and metropolitan series and the national series. For the state and metropolitan area series, the employment levels for all months, from the April before the March benchmark through the September after the March benchmark, are also replaced by these population counts. For the national series, only the March benchmark month has its employment replaced by the population counts. The CES program staff adjusts the monthly levels before the benchmark month by applying a linear wedge that is based on the revision to the March level. Months subsequent to March are sample-based estimates from the new March benchmark level.2
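
The wedge-back step for the national series can be sketched in a few lines of Python. This is a minimal reading of the procedure described above, with invented data and function names; it is not CES production code:

```python
# A minimal sketch of the linear "wedge": the prior March level is held
# fixed, the new March level is reset to the population (benchmark) value,
# and each intervening month absorbs a pro rata share of the March revision.
def wedge_adjust(monthly_levels, march_revision):
    """monthly_levels: estimates for the 11 months (April through February)
    between the prior March benchmark and the new one."""
    return [level + march_revision * i / 12
            for i, level in enumerate(monthly_levels, start=1)]

# Example: a hypothetical 120,000 upward revision to March phases in at
# 10,000 per month, so April (1/12 of the wedge) shifts by 10,000.
levels = [100_000_000 + 50_000 * i for i in range(1, 12)]
print(wedge_adjust(levels, 120_000)[0] - levels[0])  # 10000.0
```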

CES staff generates October and November estimates as sample-based estimates linked from the newly established September level. Note that with the next benchmark cycle, the original postbenchmark population values for April through September are replaced again, using newer versions of the QCEW and updated noncovered employment counts. As in the example at the beginning of this section, April 2016 through September 2016 employment data are replaced once as part of the March 2016 benchmark process and are replaced again a year later as part of the March 2017 benchmark process.

In addition to replacing the estimates, BLS also addresses some administrative issues in the QCEW. The presence of noneconomic code changes (NECCs) is one such issue. For example, each year approximately one-third of all establishments in the QCEW are contacted as part of the Annual Refiling Survey. State Labor Market Information agencies ask companies to verify their NAICS (North American Industry Classification System) industry classification, ownership, and location. These NECC corrections are implemented in the QCEW with first-quarter data. The effect of changes that represent less than 6 percent of the employment in a series is distributed across 12 months. For changes of 6 percent or more, CES staff lengthen the period across which the employment change is distributed.
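
A hypothetical sketch of this distribution rule might look like the following. The 6-percent threshold and the 12-month default come from the text; the longer window for large changes is an assumption for illustration, since the text says only that CES staff lengthen it:

```python
# Hypothetical sketch of spreading a noneconomic code change (NECC) across
# months. Changes under 6 percent of a series' employment are distributed
# across 12 months; larger changes get a longer window.
def necc_schedule(series_employment, necc_change, long_window=24):
    """Return the per-month employment adjustment for an NECC.

    The 24-month window for large changes is an illustrative assumption;
    the text says only that the window is lengthened.
    """
    months = 12 if abs(necc_change) / series_employment < 0.06 else long_window
    return [necc_change / months] * months

# A 5,000-job reclassification in a 200,000-job series (2.5 percent)
# is spread evenly over 12 months.
print(necc_schedule(200_000, 5_000))  # twelve increments of ~416.7
```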

Reasons behind the current method

In general, small geographic areas and industries have fewer sample units, which leads to more variability in the estimates. Although error is associated with the administrative data from the QCEW, BLS decided more than 30 years ago that the error in the administrative data was preferable to preserving the variability of the sample-based estimates in the final benchmarked time series.

Although the sample sizes from when the current benchmarking method was established are not readily available, current sample sizes illustrate how frequently series have small samples. CES estimation occurs at the most detailed levels, referred to as basic cells, and higher level industry totals are the sum of these basic cells. The CES program models more than 55 percent of the slightly more than 6,000 basic series rather than using only the sample to estimate the data. A series is modeled when its sample is deemed too small,3 generally when the sample has fewer than 30 companies (as defined by a distinct UI account). The CES models combine a sample-based estimate with a trend component to smooth the data.4
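
The following stylized Python sketch conveys the idea of blending a sample-based estimate with a smoother trend component. The weighting rule is invented for illustration; the production small domain and Fay-Herriot models are documented in the BLS Handbook of Methods (see note 4):

```python
# Stylized blend of a noisy sample-based estimate with a trend component,
# giving the sample less weight when it covers few UI accounts. The linear
# weight below is illustrative only, not the CES model.
def blended_estimate(sample_estimate, trend_estimate, n_ui_accounts,
                     full_weight_at=30):
    weight = min(n_ui_accounts / full_weight_at, 1.0)  # illustrative weight
    return weight * sample_estimate + (1 - weight) * trend_estimate

# With only 6 reporting UI accounts, the blend leans heavily on the trend.
print(blended_estimate(12_500, 12_000, n_ui_accounts=6))  # 12100.0
```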

Complications affecting the current method

Research by the Federal Reserve Bank of Dallas has shown that CES benchmarked population data display a seasonal pattern different from that of the sample-based estimates.5 The CES program accounts for the differing patterns by using a two-step seasonal adjustment process to develop the final seasonally adjusted series. However, BLS seasonally adjusts only about 2,000 state and metropolitan series, compared with the 17,000 series published without seasonal adjustment. Without seasonally adjusted data, analyzing employment changes can be complicated, particularly for over-the-year comparisons that cross the splice point between population- and sample-based data.6
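
A toy illustration of the two-step idea follows: each portion of the hybrid series gets its own seasonal factors before splicing, so the differing seasonal patterns are each removed on their own terms. The simple monthly-mean factors below are a stand-in for the far more sophisticated production adjustment:

```python
import numpy as np

def monthly_factors(values, months):
    """Toy seasonal factors: mean deviation from the overall level, by month."""
    v, m = np.asarray(values, float), np.asarray(months)
    return {mo: v[m == mo].mean() - v.mean() for mo in np.unique(m)}

def seasonally_adjust(values, months, factors):
    return [val - factors[mo] for val, mo in zip(values, months)]

# Fake hybrid series: a population-based year with a deep January dip,
# then a sample-based year with a shallower one.
months = list(range(1, 13))
pop = [1000 - 60 * (m == 1) for m in months]   # population-based span
smp = [1000 - 20 * (m == 1) for m in months]   # sample-based span

# Step one: adjust each span with its own factors. Step two: splice.
adjusted = (seasonally_adjust(pop, months, monthly_factors(pop, months))
            + seasonally_adjust(smp, months, monthly_factors(smp, months)))
print(adjusted[0], adjusted[12])  # each span is now flat at its own level
```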

The seasonal differences also complicate interpreting revisions associated with the benchmark process. Tracking the population and sample differences nationally shows that the QCEW data typically grow by approximately a quarter of a million more than the CES sample-based data from March through September each year. The difference widens through the fourth quarter, and then the population data show a much larger seasonal decrease in the first quarter.7 As a result, QCEW data grow by approximately a quarter of a million less than CES sample-based data from September through March each year. These differences may represent administrative or reporting error rather than true economic differences.8
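
A back-of-the-envelope sketch shows how these offsetting divergences translate into the expected March revision discussed next (the quarter-million figure is the approximate magnitude cited above, not an exact constant):

```python
# Approximate national divergence of QCEW growth minus CES sample growth.
qcew_minus_ces = {"mar_to_sep": +250_000, "sep_to_mar": -250_000}

# Over a full year, the two divergences roughly cancel:
print(sum(qcew_minus_ces.values()))  # ~0

# But the current method carries sample-based growth forward from a
# population-based September level. Because sample growth runs roughly
# 250,000 above population growth from September to March, the March
# estimate lands about 250,000 above the incoming benchmark.
march_gap = -qcew_minus_ces["sep_to_mar"]  # CES estimate minus benchmark
print(march_gap)  # ~250,000 expected downward March revision
```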

As mentioned, state and metropolitan data are replaced with population data through September with each benchmark. Historically, BLS has highlighted the March revision as the primary indicator of the accuracy of the CES survey. However, given the seasonal differences described previously, the March revisions under the current method should be expected, in aggregate, to average a downward revision of approximately 250,000.

Results

Table 1 provides a summary of March benchmark revisions at the state nonfarm level from 2003 to 2016. Table 2 provides comparable revisions for MSAs.

Table 1. Summary statistics of March benchmark revisions, statewide total nonfarm and supersectors, not seasonally adjusted, 2003–16

| Variable and supersector | 2003 | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Average absolute percent revision, statewide total nonfarm | 0.6 | 0.4 | 0.5 | 0.5 | 0.4 | 0.4 | 0.9 | 0.4 | 0.5 | 0.7 | 0.4 | 0.5 | 0.4 | 0.4 |
| Mean percent revision, statewide total nonfarm | −0.2 | 0.2 | 0.1 | 0.3 | 0.0 | −0.1 | −0.8 | −0.1 | 0.2 | 0.6 | 0.3 | 0.1 | (1) | −0.1 |
| Standard deviation | 0.7 | 0.5 | 0.6 | 0.7 | 0.5 | 0.5 | 0.8 | 0.5 | 0.6 | 0.7 | 0.6 | 0.6 | 0.5 | 0.6 |
| Minimum | −1.9 | −0.9 | −1.2 | −0.8 | −1.5 | −1.4 | −3.8 | −1.3 | −1.8 | −1.5 | −0.7 | −1.5 | −1.8 | −1.6 |
| Maximum | 1.4 | 1.8 | 1.2 | 4.2 | 1.2 | 1.0 | 1.1 | 1.4 | 1.4 | 2.2 | 2.9 | 2.0 | 1.3 | 0.9 |
| Average absolute percent revision, statewide by supersector: | | | | | | | | | | | | | | |
| Mining and logging | 3.8 | 5.8 | 6.5 | 3.4 | 3.8 | 4.3 | 6.0 | 7.5 | 3.2 | 4.7 | 3.7 | 2.8 | 4.2 | 4.5 |
| Construction | 2.6 | 2.4 | 2.8 | 2.7 | 2.2 | 2.6 | 4.0 | 3.6 | 3.2 | 4.4 | 3.1 | 3.0 | 2.6 | 2.3 |
| Manufacturing | 1.4 | 1.2 | 1.3 | 1.7 | 1.2 | 1.3 | 2.2 | 1.8 | 1.4 | 1.5 | 1.4 | 1.2 | 1.3 | 1.3 |
| Trade, transportation, and utilities | 1.0 | 0.8 | 0.7 | 0.5 | 0.7 | 0.6 | 1.6 | 1.2 | 0.9 | 1.1 | 1.0 | 0.7 | 0.6 | 0.8 |
| Information | 2.5 | 2.5 | 2.2 | 1.9 | 2.2 | 2.0 | 3.3 | 2.3 | 2.4 | 3.2 | 2.2 | 2.0 | 2.6 | 3.0 |
| Financial activities | 1.7 | 1.0 | 1.2 | 0.9 | 1.1 | 1.0 | 1.6 | 1.8 | 1.9 | 2.2 | 1.6 | 2.0 | 1.9 | 2.3 |
| Professional and business services | 2.1 | 1.9 | 1.7 | 2.1 | 1.5 | 1.3 | 2.2 | 2.2 | 1.8 | 1.9 | 1.8 | 1.6 | 1.6 | 1.4 |
| Education and health services | 1.0 | 1.1 | 0.6 | 0.9 | 0.7 | 0.8 | 0.8 | 1.0 | 0.9 | 1.4 | 1.6 | 0.9 | 0.9 | 0.8 |
| Leisure and hospitality | 1.3 | 1.4 | 1.4 | 1.2 | 1.1 | 0.9 | 1.7 | 1.8 | 1.9 | 2.3 | 1.4 | 1.4 | 1.4 | 1.5 |
| Other services | 2.1 | 2.0 | 1.9 | 1.7 | 1.5 | 1.3 | 1.9 | 1.9 | 2.4 | 2.7 | 2.1 | 2.4 | 2.1 | 2.4 |
| Government (2) | 0.8 | 0.7 | 0.6 | 0.7 | 0.5 | 0.6 | 0.6 | 0.8 | 0.7 | 1.0 | 0.7 | 0.9 | 0.7 | 0.5 |

Notes:

(1) Less than 0.05.

(2) Includes federal, state, and local governments.

Source: U.S. Bureau of Labor Statistics.

Table 2. Summary statistics of March benchmark revisions, MSAs, not seasonally adjusted, 2005–16

| Variable | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| All MSAs: | | | | | | | | | | | | |
| Number of MSAs | 367 | 367 | 318 | 319 | 378 | 381 | 381 | 381 | 372 | 258 | 387 | 387 |
| Average absolute percent revision | 1.1 | 1.1 | 0.9 | 1.0 | 1.8 | 1.1 | 1.1 | 1.6 | 1.2 | 1.1 | 1.0 | 1.1 |
| Mean | (1) | 0.2 | −0.2 | −0.3 | −1.4 | 0.0 | 0.1 | 0.4 | 0.4 | 0.3 | −0.3 | −0.2 |
| Standard deviation | 1.6 | 1.5 | 1.2 | 1.3 | 1.9 | 1.5 | 1.5 | 2.1 | 1.6 | 1.5 | 1.3 | 1.5 |
| Less than 100,000 employment: | | | | | | | | | | | | |
| Number of MSAs | 178 | 177 | 117 | 118 | 183 | 188 | 187 | 182 | 181 | 132 | 188 | 189 |
| Average absolute percent revision | 1.3 | 1.3 | 1.1 | 1.3 | 2.1 | 1.4 | 1.4 | 2.0 | 1.4 | 1.4 | 1.2 | 1.3 |
| Mean | −0.2 | −0.2 | 0.2 | −0.4 | −1.5 | −0.2 | −0.1 | 0.4 | 0.3 | 0.3 | 0.5 | −0.5 |
| Standard deviation | 1.7 | 1.8 | 1.4 | 1.7 | 2.3 | 1.9 | 1.8 | 2.6 | 1.8 | 1.8 | 1.6 | 1.8 |
| 100,000 to 499,999 employment: | | | | | | | | | | | | |
| Number of MSAs | 140 | 141 | 144 | 144 | 138 | 139 | 138 | 141 | 140 | 100 | 147 | 146 |
| Average absolute percent revision | 1.1 | 0.4 | 0.8 | 0.9 | 1.6 | 0.9 | 0.9 | 1.5 | 1.1 | 0.9 | 0.8 | 0.9 |
| Mean | 0.2 | 0.2 | −0.2 | −0.3 | −1.4 | 0.1 | 0.2 | 0.4 | 0.3 | 0.4 | −0.2 | 0.1 |
| Standard deviation | 1.5 | 1.4 | 1.0 | 1.1 | 1.5 | 1.1 | 1.2 | 2.0 | 1.4 | 1.0 | 1.0 | 1.2 |
| 500,000 to 999,999 employment: | | | | | | | | | | | | |
| Number of MSAs | 25 | 23 | 28 | 28 | 31 | 30 | 31 | 30 | 25 | 11 | 22 | 20 |
| Average absolute percent revision | 0.6 | 0.6 | 0.7 | 0.6 | 1.4 | 0.7 | 0.8 | 0.9 | 0.8 | 0.9 | 0.8 | 0.6 |
| Mean | 0.2 | 0.5 | 0.2 | −0.3 | −1.2 | 0.3 | 0.5 | 0.6 | 0.6 | 0.3 | 0.2 | 0.3 |
| Standard deviation | 0.8 | 0.7 | 1.0 | 0.7 | 1.0 | 0.8 | 0.8 | 1.0 | 1.1 | 1.2 | 0.9 | 0.8 |
| 1 million or more employment: | | | | | | | | | | | | |
| Number of MSAs | 24 | 26 | 29 | 29 | 26 | 24 | 25 | 26 | 26 | 15 | 30 | 32 |
| Average absolute percent revision | 0.7 | 0.5 | 0.6 | 0.5 | 0.9 | 0.5 | 0.7 | 0.7 | 1.1 | 0.7 | 0.4 | 0.4 |
| Mean | 0.3 | 0.1 | −0.1 | −0.3 | −0.9 | 0.3 | 0.6 | 0.7 | 0.8 | 0.5 | −0.1 | 0.1 |
| Standard deviation | 1.0 | 0.7 | 0.8 | 0.8 | 0.7 | 0.6 | 0.7 | 0.6 | 1.4 | 0.7 | 0.6 | 0.5 |

Notes:

(1) Less than 0.05.

MSAs = metropolitan statistical areas.

Source: U.S. Bureau of Labor Statistics.

At the state nonfarm level, the average absolute March revisions were 0.5 percent between 2003 and 2016. Over the same period, the largest average absolute percent revisions at the supersector level were in mining and logging (4.3 percent) and construction (3.0 percent). The smallest average absolute percent revisions were in government (0.7 percent); trade, transportation, and utilities (0.9 percent); and education and health services (1.0 percent).

The average mean percent revision over the last 14 years is 0.05 percent. However, one should be cautious when interpreting that number. Because it does not weight states by their employment size, it should not be interpreted as the average revision to the sum of the states over the period.
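
Both headline statistics can be approximately reproduced from the statewide total nonfarm rows of table 1. Note that the 2015 mean revision, flagged "(1)" in the table, is entered as 0.0 below, so the computed mean differs slightly from the 0.05 percent cited in the text, which is based on unrounded data:

```python
# Statewide total nonfarm rows transcribed from table 1, 2003-16.
abs_rev = [0.6, 0.4, 0.5, 0.5, 0.4, 0.4, 0.9,
           0.4, 0.5, 0.7, 0.4, 0.5, 0.4, 0.4]
mean_rev = [-0.2, 0.2, 0.1, 0.3, 0.0, -0.1, -0.8,
            -0.1, 0.2, 0.6, 0.3, 0.1, 0.0, -0.1]  # 2015 "(1)" entered as 0.0

print(round(sum(abs_rev) / len(abs_rev), 2))    # 0.5: average absolute revision
print(round(sum(mean_rev) / len(mean_rev), 2))  # 0.04: near zero, signs offset
```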

Revisions since the 2005 benchmark for MSAs are shown in table 2.9 For comparison, the table breaks down the revisions by MSA size. Across the most recent 12 years, the average absolute revisions have averaged 1.2 percent across all MSAs. As expected, the larger the MSA, the smaller the absolute percent revisions. MSAs with employment of 1 million or more have averaged an absolute percent revision of slightly more than 0.6 percent, while MSAs with employment below 100,000 have averaged slightly more than 1.4 percent.

Conclusions

The current benchmarking method used for state and metropolitan area estimates addresses the concern of excessive variability in the final benchmarked data that might result from estimates produced with small samples. In addition, replacing estimates from April through September following the March benchmark updates the series with the most current population information available. However, replacing the estimates with population values does not directly address many of the issues associated with the differing seasonal patterns between the population and sample data. Current research into alternative benchmarking approaches might yield a method that addresses some of the limits of the present-day benchmark method, while continuing to address concerns related to the volatility of small domain estimates.

Suggested citation:

Kirk Mueller, "Benchmarking the Current Employment Statistics state and area estimates," Monthly Labor Review, U.S. Bureau of Labor Statistics, November 2017, https://doi.org/10.21916/mlr.2017.26

Notes


1 The CES Technical Notes contain detailed information on the industries, such as education, religious organizations, and rail transportation, that qualify as potentially having employment that is not covered under unemployment insurance tax law, https://www.bls.gov/web/empsit/cestn.htm.

2 Before the 2010 benchmark, BLS did not replace all state estimates with population data through the third quarter of the postbenchmark period. The number of states whose estimates were replaced through the third quarter increased through the 2000s until all third-quarter estimates were replaced with third-quarter population data. Since the 2010 benchmark, all state estimates have been replaced with population data through the third quarter, except in the 2011 benchmark, when BLS replaced estimates with population data only through the second quarter.

3 To help deal with the variability of estimates at the detailed level, the CES program uses a “top-down” approach. This approach forces the detailed estimates to sum to estimates independently produced at the supersector level for statewide and large metropolitan statistical area series.

4 The CES uses a small domain model as well as versions of the Fay-Herriot model in estimating smaller domains. For more information, see https://www.bls.gov/opub/hom/pdf/homch2.pdf.

5 For more information, see Franklin D. Berger and Keith R. Phillips, “Solving the mystery of the disappearing January blip in state employment data,” Economic Review (Dallas, TX: Federal Reserve Bank of Dallas), April 1994, pp. 53–62, http://www.dallasfed.org/assets/documents/research/er/1994/er9402d.pdf.

6 The two-step seasonal adjustment process is explained in detail by Stuart Scott, George Stamas, Thomas Sullivan, and Paul Chester, “Seasonal adjustment of hybrid economic time series,” Proceedings of the Section on Survey Research Methods, American Statistical Association, 1994, https://www.bls.gov/osmr/research-papers/1994/st940350.htm.

7 Kenneth W. Robertson, “Benchmarking the Current Employment Statistics survey: the past, present, and future,” Monthly Labor Review, October 2017, https://doi.org/10.21916/mlr.2017.24.

8 Jeffrey A. Groen, “Sources of error in survey and administrative data: the importance of reporting procedures,” Journal of Official Statistics, vol. 28, no. 2, June 2012, pp. 173−198.

9 The 2004 benchmark introduced a large change in the metropolitan statistical area (MSA) definitions that complicated the examination of benchmark revisions. The MSA redefinitions in 2014 affected the comparisons less than the MSA redefinitions in 2004 and are included in the table.


About the Author

Kirk Mueller
mueller.kirk@bls.gov

Kirk Mueller is a division chief in the Office of Employment and Unemployment Statistics, U.S. Bureau of Labor Statistics.
